THE DRAWBOTS

1 03 2009

By PAUL BROWN, BILL BIGGE, JON BIRD, PHIL HUSBANDS, MARTIN PERRIS, DUSTIN STOKES

INTRODUCTION

In 2005 an international, multi-disciplinary, inter-institutional group of researchers began a three-year research project that is attempting to use evolutionary and adaptive systems methodology (genetic algorithms, neural networks, etc.) to make an embodied robot that can exhibit creative behaviour by making marks or drawing (in the most general sense). The research is popularly known as the DrawBots Project. The research group is composed of computer and cognitive scientists, philosophers, artists, art theorists and historians. One outcome of the project will be a large-scale art installation of a group of DrawBots. Other outcomes will include various research publications reflecting the vested interests of the group, both as independent researchers and as a group.

There are a number of motivations for the project, including the production of machine-created art and the exploration of whether it is possible to develop (minimally) creative artificial agents. The research has two mutually dependent contextual frameworks. One concerns methodologies for making an agent that has the potential for manifesting autonomous creative behaviour. The second concerns methodologies for recognising such behaviour. A further emphasis is placing this work in an art historical context. Amongst the key concepts that the project is examining are: personality, autonomy, value, signature, purpose, novelty, embodiment, social context, environmental interaction, ownership and so on.

This paper forms part of the research supported by UK AHRC Grant no. B/RG/AN8285/APN19307: “Computational Intelligence, Creativity, and Cognition: A Multidisciplinary Investigation”.

CONTEXT AND BACKGROUND

Artistic precedents for creative autonomy appear in the mid- to late-20th century with works like Nicolas Schöffer’s CYSP 1 (1956) and Edward Ihnatowicz’s Senster (1970). Writing about CYSP 1, Schöffer said: “Spatiodynamic sculpture, for the first time, makes it possible to replace man with a work of abstract art, acting on its own initiative, which introduces into the show world a new being whose behaviour and career are capable of ample developments” (Olats 2007). Ihnatowicz was aware of the work of the developmental psychologist Jean Piaget, and his robotic artworks express his belief that machines would never attain intelligence until they learned to interact with their environment (Brown 2008). Although at the time this was an unpopular approach within the AI discipline – which was dominated by top-down ideology – Ihnatowicz laid down an important foundation for future embodied research, and in recent years his pioneering artworks have been recognised as an important root of the current interest in bottom-up AI and the scientific discipline of Artificial Life. He was a friend of the co-author Paul Brown, and his work and ideas have directly influenced the DrawBots project, as have the words of Jack Burnham, who, in “Beyond Modern Sculpture” (1968), suggested that the future for art was the production of “life-simulation systems”.

In addition to the works above, which fit roughly into the Kinetics framework, several other important critical agendas emerged in the 1960s. These included the precedence of process over object; interaction; conceptual art; systems art and many others (Lippard 1973). The computational metamedium – as Kay later named it (1984) – provided an exceptional opportunity for the investigation of many of these ideas, and during the 1960s and ’70s the digital computer and programming languages were adopted by a number of artists who wished to explore these emerging agendas. In particular, Paul Brown developed a process using cellular automata with the intention of producing artworks that could display autonomy and self-determination and that would transcend the artist’s personal signature. Over the following three decades this approach led to an interesting and fruitful body of work, but one aim in particular proved elusive. Brown’s assumption that the use of text-based symbolic languages (in contrast to “intimate” tools like brushes and pencils) could effectively subsume signature proved, in retrospect, to be hopelessly naive. In 2000, as artist-in-residence at the Centre for Computational Neuroscience and Robotics (CCNR) at the University of Sussex (where the current research project is based), he revisited this ambition. If it wasn’t possible to design a signature-free artwork/process, could one instead be “evolved”? The DrawBots project begins here.

MINIMAL CREATIVITY

Our assumptions about creativity are minimal. We start with only two conditions, each necessary but not sufficient, for creativity. A creative behaviour must result from agency, and agency requires autonomy. Our sense of the term does not require, as the philosophical sense does, intentionality, deliberation or cognition. It simply requires behaviour that is not imposed by an external agent or programmer. A remote-controlled robot would thus not qualify, while many of the systems that populate evolutionary robotics would. We sometimes refer to this as ‘no strings attached’ agency.

Intuitions also tell us that novelty is a condition for creativity: creative artefacts or processes are novel ones. Here too we err on the side of minimal assumptions. Following Boden (2004), we distinguish absolute and relative forms of novelty. As Boden argues, relative novelty is sometimes as theoretically interesting as absolute novelty. For example, one may have a novel thought which, although others have had it before, is novel relative to one’s own mind. We broaden the latter—which is what Boden calls ‘psychological’ novelty—to include non-cognitive behaviours. This is done in two ways. A behaviour of some agent R may be novel relative to the behavioural history of R. Or a behaviour of some agent R may be novel relative to a population of which R is a member. Call the first ‘individual-relative novelty’; call the second ‘population-relative novelty.’
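To make the distinction concrete, here is a minimal sketch (the data structures and exact-match test are our own simplification; real behaviours would need a similarity measure rather than strict equality):

# Illustrative sketch only: behaviours are assumed to be reducible to
# hashable descriptors (e.g. discretised trajectories); in practice a
# similarity measure would replace the exact membership test.

def individually_novel(behaviour, own_history):
    """Novelty relative to agent R's own behavioural history."""
    return behaviour not in own_history

def population_novel(behaviour, population_histories):
    """Novelty relative to the population of which R is a member."""
    return all(behaviour not in history for history in population_histories)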

The choice to conceptualise agency and novelty so thinly is motivated both by our particular research goals and by a general methodological assumption we share with much of cognitive science. Our interest is to see what lessons can be learned about creativity and cognition through the use of synthetic, bottom-up modelling techniques. We may, after all, be working with overly thin notions, but the working supposition that weaker instances of agency and novelty may be near what’s necessary for creative behaviour enables fruitful experimentation and hypothesis generation. This is often how cognitive scientists begin: by asking what the minimal conditions for some phenomenon might be, and what we can learn from attempting to satisfy just those conditions.

The agency and novelty conditions give us two necessary conditions for creativity. The weakness of this definition is easy to see. I can right now place my head in the oven and utter ‘We need milk, butter, and bread.’ This is novel behaviour for me, and indeed behaviour that depends upon my autonomy. But would anyone count it creative? Novelty and agency, even of a very rich, cognitive sort, are thus not enough for creativity. We recognise that, and are trying to determine what is enough, even minimally. Nonetheless, we have to this point been proceeding with this incomplete definition in hand: creativity requires agency and novelty. So if we are going to build creative systems, we at least have to build systems that possess these two properties. This, as it turns out, is hard enough as a start. For a fuller discussion of these issues see Bird and Stokes (2006a,b, 2007) and Bird et al. (2007).

EVOLUTIONARY ROBOTICS

The main synthetic, bottom-up methods used in the project are those of Evolutionary Robotics (ER). ER is a biologically inspired approach to the automatic design of autonomous robots (Cliff et al. 1993, Nolfi and Floreano 2000). The field encompasses a wide range of work where one or more (sometimes all) of the following aspects of robot design are in the hands of artificial evolution: the control system, the overall body morphology, sensor and actuator properties. Populations of artificial genomes (lists of characters and numbers) encode the properties of the robot under evolutionary control. The genomes are mutated and interbred, creating new generations of robots, according to a Darwinian scheme in which the fittest individuals are most likely to produce offspring. Fitness is measured in terms of how good a robot’s behaviour is according to some evaluation criteria; this is usually automatically measured but may, in the manner of 18th century pig breeders, be based on the experimenters’ judgment. Fitness is tested either in simulation, in the real world or using a combination of the two. Typically some form of artificial neural network acts as the nervous system of the robot; properties of the network will invariably be evolved even if other aspects of the robot design are not.
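In outline, the evolutionary loop looks like the following sketch (genome length, mutation rate and the truncation-style selection here are generic placeholders rather than the settings of any particular experiment):

import random

GENOME_LEN, POP_SIZE, GENERATIONS = 64, 100, 100   # placeholder settings

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.01):
    # Flip each gene with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evaluate(genome):
    # Stand-in for decoding the genome into a robot (controller, morphology,
    # sensors...) and scoring its behaviour against the evaluation criteria.
    return sum(genome)

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    ranked = sorted(population, key=evaluate, reverse=True)
    parents = ranked[:POP_SIZE // 2]      # fitter individuals breed
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]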

Over the last decade or so, ER has been successfully applied to the design of many autonomous robots (Floreano et al. 2007). It is a discovery methodology that is free to exploit any constraints arising from the interaction of components in the controller and between the robot and environment, even when the human experimenter is not aware of them. This can potentially produce simpler, more robust robots than conventional design. It can also produce robots that exhibit unpredictable, and thus potentially novel, behaviour. Another feature of ER suited to our minimal approach to creativity is that it provides a means of generating novel designs that can, to some extent, overcome inductive bias: the phenomenon whereby the explicit and implicit biases of an experimenter constrain the space of designs that is explored. By artificially evolving control architectures from suitably low-level primitives, the final controller “need not be tightly restricted by human designers’ prejudices” (Cliff et al. 1993, p.83). ER therefore has the potential to produce models of minimal creativity not dominated by our preconceptions, case studies or possibly mistaken theories of creativity. By the same token, it has the potential to produce autonomous art-making machines that are not constrained by the artist’s (systems designer’s) prejudices or ‘signature’. Finally, artificially evolving neural networks as robot controllers allows for open-ended evolution, since their architecture can be incrementally increased in complexity by adding processing units and connections (Husbands et al. 1997). Thus, even if robot behaviour does not exhibit relative novelty at early stages in experimentation, incremental increases in neural complexity may well make the relevant difference.

EXPLORATIONS IN SIMULATION

Initial experiments were carried out in simulation using an accurate model of a Khepera robot, a standard ER platform (Mondada et al. 1993), augmented with a drawing pen placed between its drive wheels (see Figure 2 for pictures of a physical robot used in later work which employs this same arrangement). In the simulation, each robot controller was a neural network consisting of six motor neurons (two for each of the left wheel, right wheel and pen position – up or down – motors). At each time step in the simulation, the most strongly activated neuron of each motor pair controlled its associated actuator. The robot has seven sensors (six frontal IR sensors and one line detector positioned under the robot that can detect marks made by the pen). Each of the seven sensors was connected to each of the six motor neurons. A genetic algorithm was used to determine the strength of each of these connections and the bias of each of the motor neurons. An initial population of 100 robot controllers (phenotypes) was encoded as strings of 0s and 1s (genotypes). In every generation each genotype was decoded, and the performance of the resulting robot controller was tested and assigned a fitness value. A new generation of genotypes was then created by randomly selecting genotypes, with a bias towards fitter ones, and mutating them (flipping 0s to 1s or 1s to 0s with a probability of 0.01 per gene). The experiments were run for 600 generations.
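As an illustrative reconstruction of this setup (the bit-string resolution and weight range below are our own assumptions, not the experiment's documented encoding):

N_SENSORS, N_MOTORS = 7, 6     # 6 IR + 1 line detector; 3 motor-neuron pairs
BITS_PER_VALUE = 8             # assumed resolution of the bit-string encoding

def decode(genotype):
    """Map a bit string to 7x6 connection weights plus 6 biases in [-1, 1]."""
    def value(i):
        bits = genotype[i * BITS_PER_VALUE:(i + 1) * BITS_PER_VALUE]
        raw = int(''.join(str(b) for b in bits), 2)
        return 2.0 * raw / (2 ** BITS_PER_VALUE - 1) - 1.0
    weights = [[value(s * N_MOTORS + m) for m in range(N_MOTORS)]
               for s in range(N_SENSORS)]
    biases = [value(N_SENSORS * N_MOTORS + m) for m in range(N_MOTORS)]
    return weights, biases     # genotype length: (7*6 + 6) * 8 = 384 bits

def control_step(sensors, weights, biases):
    """The more strongly activated neuron of each pair drives its actuator."""
    activation = [biases[m] + sum(sensors[s] * weights[s][m]
                                  for s in range(N_SENSORS))
                  for m in range(N_MOTORS)]
    # Pairs: (0,1) left wheel, (2,3) right wheel, (4,5) pen up/down.
    return [activation[2 * p] >= activation[2 * p + 1] for p in range(3)]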

We aim for fitness functions that minimise our influence on the resulting robot behaviour. In these early experiments, rather than specifying the types of marks that a robot should make, we rewarded controllers that correlated the changes in state of their line detector and pen position. For example, if a line is detected and the robot’s pen is then raised or lowered within a short time window, the robot accumulates fitness. This fitness function resulted in robots that followed the walls and made marks around the edge of the arena (Figure 1, left). When the fitness function also rewards robots for making marks over the whole area of the arena, different behaviours evolve (Figure 1, right): robots turn away from the walls at angles and mark the central parts of the arena as well. In all our experiments crashing into walls is implicitly penalised by stopping the evaluation, thereby giving the robot less time to accumulate fitness. It is important to note that although we, via the fitness function, evaluate the mark-making behaviour of the robots, the robots themselves do not assess the marks that they have made. See Bird and Stokes (2006a,b) for further details.

Figure 1: Left – a high fitness individual from an initial experiment as described above – it does an initial loop of the arena with its pen down and on the second loop makes line segments parallel to the line it initially made by moving the pen up and down as it detects the original line. A segment of the pattern produced is blown up in the centre. Right – when the fitness function rewards making marks over the whole arena, the robots no longer follow the walls but turn away from them at angles and mark more central regions.
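The correlation-based reward described above could be computed from logged sensor and pen traces along the following lines (a sketch; the window length and scoring increment are our own assumptions):

WINDOW = 10   # assumed: pen change must follow a detection within 10 steps

def correlation_fitness(line_detected, pen_down):
    """line_detected, pen_down: per-timestep boolean traces of equal length."""
    fitness, last_detection = 0, None
    for t in range(1, len(line_detected)):
        if line_detected[t] and not line_detected[t - 1]:
            last_detection = t                      # a new line detection
        pen_changed = pen_down[t] != pen_down[t - 1]
        if (pen_changed and last_detection is not None
                and t - last_detection <= WINDOW):
            fitness += 1   # reward pen moves that shortly follow a detection
    return fitness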

A number of common criticisms of these preliminary experiments fall under the broad category of value. The robots evaluate neither the process nor the product of mark making. The robots can only detect the presence of a mark in a 2mm x 2mm region underneath them. How could an agent, from such a ‘myopic’ viewpoint, have any sense of the global pattern of the marks made across a large arena? And how could such agents have any sense of when to stop? It is possible that some kind of value condition may be what’s needed for a complete analysis of creativity. That is, perhaps value plus agency and novelty will get you creativity. Without committing to this notion, we accept that value is a good general area to mine in the search for richer models of creativity and, perhaps, for more interesting robot drawings. An ongoing strand of work is addressing the concerns about value by extending our robotics framework.

The direction we are currently exploring revolves around fractals. Fractals, understood broadly, are patterns that display self-similarity at different magnifications (Mandelbrot 1982). We are using them in the following ways. Endow the agents with a ‘fractal detector.’ Endow the agents further with a ‘fractal preference’, such that they acquire fitness for making fractal patterns on the arena surface (see Bird and Stokes [2006a, 2007] for a description of how this is done). In this approach, the myopia of our earlier agents is partly cured by allowing them to see a larger area of the drawing surface but, since they have not been given a global viewpoint, mainly by taking advantage of the nature of fractal patterns. If a region of marked surface that the agent can see isn’t self-similar, then a fit agent will detect this and add marks to make it self-similar. The agent thus has something to look for and a preference for making things that look a certain way. This capacity is admittedly not a sophisticated aesthetic or artistic one. But it is an evaluation technique, and it results in the agent making choices: it will prefer some marks over others, and will change some and leave others. Moreover, fractals are a broad enough pattern category that the agents have considerable freedom in the marks they can make. The fractal framework also allows for a natural completion criterion: at some point the drawing surface will be covered with a self-similar pattern and the robot will add no more marks.
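One simple, standard way to detect approximate self-similarity is box counting (Mandelbrot 1982); the sketch below is our own illustration, not necessarily the detector used in the project (see Bird and Stokes [2006a, 2007] for that):

import math

def box_count(marks, box):
    """Number of box x box cells containing at least one mark.
    marks: set of (x, y) marked cells on the drawing surface."""
    return len({(x // box, y // box) for (x, y) in marks})

def fractal_dimension(marks, box_sizes=(1, 2, 4, 8, 16)):
    """Estimate dimension from the slope of log(count) against log(1/box)."""
    points = [(math.log(1.0 / b), math.log(max(box_count(marks, b), 1)))
              for b in box_sizes]
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points) /
             sum((x - mx) ** 2 for x, _ in points))
    # ~1 for a plain line, ~2 for a filled region, in between for
    # fractal-like patterns.
    return slope

An agent with a ‘fractal preference’ could then accumulate fitness in proportion to how close such a local estimate is to a target dimension.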

Fractal patterns seem to have a natural aesthetic appeal and, because the range of self-similar patterns that the robots can produce is potentially very broad, the resulting marks may surprise us. An element of surprise is an aesthetic merit, and thus a potential benefit of the fractal framework.

THE DrawBot ROBOT

While the initial simulation experiments suggest the viability of the approach, the intention was always to develop drawing behaviours on real robots. The current physical manifestation of the DrawBot is illustrated in Figure 2. The main body of the robot forms a ring with an outer diameter of approximately 329mm and an inner diameter of 165mm. The main drive wheels are on each side with supporting castors at the front and back.

The primary navigational sensors consist of eight infra-red rangefinders and eight tactile switches, each set distributed equally around the body of the robot. The eight tactile switches are grouped in pairs, with each pair operated by a single bumper such that contact at one end of the bumper will activate only one switch whilst contact in the centre will operate both. The bumpers extend from a few millimetres above the floor to the full height of the robot’s main body, approximately 75mm from the floor, and the rangefinders are mounted inside the body pointing out through holes in the bumpers (see Figure 2). In addition to the distance and contact sensors, each wheel is equipped with a rotary encoder producing 200 counts per revolution, or approximately one count per 1.3mm of movement along the floor.
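For illustration, the quoted resolution implies a wheel circumference of about 200 × 1.3mm ≈ 260mm, i.e. a wheel diameter of roughly 83mm. A generic differential-drive odometry update from the two wheel encoders might look like this sketch (the wheel-base value is an assumption for illustration, not a measured parameter of the DrawBot):

import math

MM_PER_COUNT = 1.3    # from 200 counts per wheel revolution
WHEEL_BASE = 250.0    # mm between the drive wheels -- assumed for illustration

def odometry_update(x, y, heading, d_left, d_right):
    """Dead-reckoning pose update from incremental encoder counts."""
    dl, dr = d_left * MM_PER_COUNT, d_right * MM_PER_COUNT
    d_centre = (dl + dr) / 2.0            # distance moved by robot centre
    d_theta = (dr - dl) / WHEEL_BASE      # change in heading (radians)
    x += d_centre * math.cos(heading + d_theta / 2.0)
    y += d_centre * math.sin(heading + d_theta / 2.0)
    return x, y, heading + d_theta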

This arrangement of sensors and body shape produces a robot with both side-to-side and front-to-back symmetry and, as such, the robot has no explicit ‘front’ or ‘back’. This symmetry, along with the narrow-beam rangefinders, produces a platform that is easy to reproduce in a two-dimensional simulation.

The ring-shaped body was chosen in order to keep the central portion of the robot completely clear of structure. This allows us to mount a variety of drawing implements in the centre of the robot, along with a variety of floor sensors to detect marks on the drawing surface. The current version uses a penholder mounted on an arm that allows the pen to be lifted off the drawing surface.

Local control for the robot, along with the power supply, is mounted on a platform above the ring-shaped body at a height of approximately 200mm. The robot is not designed to operate with on-board autonomy; instead it incorporates a Bluetooth-based radio modem and a serial control protocol allowing the robot to be controlled by a host PC. The on-board processing is provided by a PIC microcontroller which can respond to commands from the host PC and reply with sensory and other data. As well as controlling the main sensor and drive functions of the robot, the on-board controller is capable of controlling a drawing-implement controller with up to four degrees of freedom and eight analogue sensors. The robot has a limited autonomy mode in which it will subsume commands from the host PC in order to avoid obstacles.
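By way of illustration only, a host-side control loop over such a serial link might be structured as below; the command names, framing and port settings are entirely hypothetical, as the actual protocol is specific to the robot’s firmware:

import serial   # pyserial: the Bluetooth modem appears as a serial port

port = serial.Serial('/dev/rfcomm0', 115200, timeout=0.1)  # assumed settings

def send(command):
    """Send one command line and return the robot's reply."""
    port.write((command + '\n').encode('ascii'))
    return port.readline().decode('ascii').strip()

def controller_step(sensor_reply):
    # Stand-in for the evolved controller (see the other sections);
    # returns left speed, right speed and pen state.
    return 100, 100, True

while True:                                     # runs until interrupted
    sensors = send('SENSORS?')                  # hypothetical query command
    left, right, pen_down = controller_step(sensors)
    send('DRIVE %d %d' % (left, right))         # hypothetical drive command
    send('PEN %d' % (1 if pen_down else 0))     # hypothetical pen command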

For simple sensing of the drawing surface an eight point linear light sensor can be interfaced to the control board to provide information about the area directly ahead of the pen. In the near future video cameras will be added both as more powerful drawing surface sensors and, more generally, as distal sensors allowing a richer set of interactions between the robot and its environment. The robot is deliberately constructed to allow easy addition of new sensors as needed.

FROM SIMULATION TO REALITY

This section outlines initial successful results in transferring behaviours evolved in simulation onto the real robot. A simulation of the robot described in the previous section was developed using Jakobi’s (1998) minimal simulation methodology, whereby the robot dynamics and its interactions with the environment are modelled in a relatively simple way, but a large amount of structured noise is used to force robust, generalised behaviours that will cross over into reality despite the discrepancies between the simulation and the world. Minimal simulations are computationally very efficient, allowing relatively fast evolution of robot controllers.
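A much-simplified sketch of the structured-noise idea (the spreads and the choice of parameters to vary are our own illustration, not Jakobi’s actual settings):

import random

def noisy(value, spread):
    """Jitter a modelled quantity so controllers cannot rely on exact values."""
    return value + random.uniform(-spread, spread)

def perturbed_trial_parameters():
    # Reliable aspects of the robot-environment interaction are modelled;
    # everything unreliable is varied between trials, so only behaviours
    # robust to these variations consistently score high fitness.
    return {
        'motor_gain': noisy(1.0, 0.2),                       # assumed spread
        'sensor_gains': [noisy(1.0, 0.1) for _ in range(10)],
        'wheel_slip': noisy(0.0, 0.05),
    }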

The controllers evolved in the experiment briefly described below were Continuous Time Recurrent Neural Networks (CTRNNs) (Beer et al. 1989), a rather more complex type of network than those used in the earlier experiments described above. The activation of a node, y, is defined by the following differential equation:

Eqn. 1: τ_i dy_i/dt = −y_i + Σ_j w_ji σ(g_j(y_j + θ_j)) + I_i

where τ is a time constant, w a connection weight, g a gain, θ a bias, I an external input and σ the sigmoid function shown in Equation 2. Node outputs are calculated as σ(g(y + θ)).

Eqn. 2: σ(x) = 1/(1 + e^−x)

The networks consisted of forty fully connected nodes. The connection weights, time constants, biases and gains were encoded as a string of real numbers in the range [0,1]. The weights, biases and gains were linearly scaled to values in the range [−30,30], [−4,4] and [1,31] respectively. The time constant was calculated as 10^x, where x = −0.6 + 3a and a is the value encoded in the genotype. The state of each neuron was initially set to a random value within ±2 of that neuron’s bias. Ten of the neurons had external inputs from the sensors; three neurons acted as motor outputs, one for each wheel and one to lower and raise the pen. For full details see Perris (2007).
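For concreteness, a forward-Euler integration of Eqn. 1 using the parameter scalings just described might look like the following sketch (the integration step size is an assumption):

import math

N = 40       # fully connected nodes
DT = 0.05    # Euler integration step -- assumed value

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def scale_parameters(genes):
    """Map genotype values in [0, 1] to CTRNN parameters as described above."""
    w = [[-30.0 + 60.0 * genes['w'][i][j] for j in range(N)] for i in range(N)]
    theta = [-4.0 + 8.0 * a for a in genes['theta']]
    g = [1.0 + 30.0 * a for a in genes['g']]
    tau = [10.0 ** (-0.6 + 3.0 * a) for a in genes['tau']]
    return w, theta, g, tau

def euler_step(y, I, w, theta, g, tau):
    """One step of: tau_i dy_i/dt = -y_i + sum_j w_ji sig(g_j(y_j+theta_j)) + I_i"""
    out = [sigmoid(g[j] * (y[j] + theta[j])) for j in range(N)]
    return [y[i] + DT * (-y[i] + sum(w[j][i] * out[j] for j in range(N))
                         + I[i]) / tau[i]
            for i in range(N)]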

A steady-state genetic algorithm (see e.g. Eiben and Smith 2003) was used with a population of size 30, run for 3000 generations. Offspring were created by making mutated copies of parents selected by a tournament method.
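One steady-state step might look like the following sketch (the tournament size and replace-the-worst policy are our assumptions rather than the experiment’s documented settings):

import random

TOURNAMENT = 3   # assumed tournament size

def tournament_select(fitnesses):
    """Index of the fittest of a few randomly chosen individuals."""
    contenders = random.sample(range(len(fitnesses)), TOURNAMENT)
    return max(contenders, key=lambda i: fitnesses[i])

def steady_state_step(population, fitnesses, mutate, evaluate):
    """Replace the current worst individual with a mutated copy of a parent."""
    parent = population[tournament_select(fitnesses)]
    child = mutate(parent)
    worst = min(range(len(fitnesses)), key=lambda i: fitnesses[i])
    population[worst] = child
    fitnesses[worst] = evaluate(child)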

Figure 3 shows the results from an experiment with this setup. The aim was to explore indirect fitness functions that did not involve specific drawing elements but had a more ‘ecological’ feel. Such indirect fitness functions are an interesting direction complementary to the other experimental strands described above. In the simulation small circular pieces of ‘food’ were randomly scattered in a rectangular area of the arena (represented by the inner rectangle in the central image in Figure 3 – ‘food’ not shown). Fitness was gained when a line drawn by the pen intersected one of the food particles. However, each robot started with a fixed amount of energy which was used up at a constant rate while the pen was down but not while it was up; the robot could move and ‘draw’ freely for a fixed time period (1 minute) or until its energy ran out, whichever was sooner. The robot started in a random position and fitness was averaged over 50 trials.
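This evaluation scheme might be sketched as follows (the energy budget, step rate and controller interface are illustrative assumptions; as described above, fitness would be averaged over 50 such trials):

ENERGY_BUDGET = 1000   # pen-down steps available -- assumed value
TRIAL_STEPS = 3600     # 'one minute' of simulated time -- assumed step rate

def evaluate_trial(run_controller, food):
    """food: list of (x, y, radius) circles scattered in the food zone.
    run_controller yields the pen position and state at each time step."""
    fitness, energy, eaten = 0, ENERGY_BUDGET, set()
    for (x, y, pen_down) in run_controller(TRIAL_STEPS):
        if pen_down:
            energy -= 1            # energy drains only while the pen is down
            if energy <= 0:
                break              # trial ends early when energy runs out
            for i, (fx, fy, r) in enumerate(food):
                if i not in eaten and (x - fx) ** 2 + (y - fy) ** 2 <= r * r:
                    fitness += 1   # the drawn line intersected a food particle
                    eaten.add(i)
    return fitness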

The fittest robots all displayed similar behaviour: they made sweeping curves that alternated in direction and fanned out over a reasonable area of the ‘food zone’. This is a good strategy for systematically covering a large area without running out of energy, and it also produces some aesthetically interesting results. The image produced by the real robot, shown at the right of Figure 3, is qualitatively very similar to those found in the simulation, but the semi-circular curves are closer together and the robot tends to draw a full circle at the start. These differences are mainly due to greater wheel slippage on the real drawing surface (a shiny whiteboard) and can be rectified in the future by slight changes to the simulation. These initial results look very promising and, with more complex fitness functions and ‘ecological’ scenarios, including making use of sensors that give feedback from the current state of the drawing (see the discussion of ‘value’ earlier), it is very likely that richer drawings will soon be produced.

CONCLUSIONS

With 15 months of the project still remaining, the team are cautiously optimistic that their goal of evolving minimally creative behaviour will be met. However, the very significant problem of how to recognise and acknowledge such behaviour remains. There is considerable historical evidence that humans are inept at recognising new creative behaviours amongst themselves. Examples from the arts include: the 1863 Salon des Refusés, where works by Manet and the future Impressionists were spat upon by “knowledgeable” Parisian art critics; the neglect of Bach as a major figure for over a century until his reinstatement by Mendelssohn; the status of graffiti in most modern cities; and, in the scientific domain, the many examples used by Kuhn (1962) to illustrate his theory of scientific revolutions. Indeed, Kuhn states his belief that “disciplines change when old men die” (the gender-specific phrasing is his).

It is only relatively recently that humans have been able to acknowledge creativity in other animals, so how will they recognise creativity when it emerges from an alife agent? The DrawBots represent a new mutation, a new form of life, within a human environment that is likely to be for the most part unsympathetic if not downright hostile. Mary Shelley’s Frankenstein (1818), Fritz Lang’s Metropolis (1927, restored 2002), Karel Čapek’s R.U.R. and the more recent Terminator movies are amongst the fictional illustrations of the unease most people feel when confronted by human-made autonomous agency. The homunculus, succubus and golem give a good indication of the theological attitude to human interference in work that should be the sole preserve of the gods!

Even a sympathetic observer is likely to have problems in evaluating the output of a strange new mutant lifeform. The research team are themselves subject to this constraint; they have discussed the problem but have no clear solutions at this time. One possibility is to evolve an alife agent specialised to recognise creative behaviour (Saunders and Gero 2001).

This latter possibility suggests that, in the not-too-distant future, we may live alongside colonies of artificial lifeforms. They will be adapted to many different environments and a wide variety of applications and will pursue creative endeavours that may be fully comprehensible only to themselves. We hope that the DrawBots project will provide some insight into this brave new and alien post-human world.

References:

Beer, R.D., H.J. Chiel, and L.S. Sterling. (1989) Heterogeneous neural networks for adaptive behavior in dynamic environments. In D. Touretzky, (Ed), Neural Information Processing Systems 1, pages 577–585, Morgan Kaufmann.

Bird, J. and D. Stokes (2006a) ‘Evolving Fractal Drawings’. In C. Soddu (Ed.) Generative Art 2006: Proceedings of the 9th International Conference, pp. 317-327.

Bird, J. and D. Stokes (2006b) ‘Evolving Minimally Creative Robots’. In S. Colton and A. Pease (Eds.) Proceedings of the 3rd International Joint Workshop on Computational Creativity (ECAI ’06), pp. 1-5.

Bird, J. and D. Stokes (2007) ‘Minimal Creativity, Evaluation and Pattern Discrimination’. In A. Cardoso and G. Wiggins (Eds.) Proceedings of the 4th International Joint Conference on Computational Creativity.

Bird, J., D. Stokes, P. Husbands, P. Brown and B. Bigge (2007, forthcoming) ‘Towards Autonomous Artworks’, Leonardo Electronic Almanac.

Boden, M. A., (2004), The Creative Mind, Routledge

Brown, P. (2008, forthcoming), ‘From Systems Art to Artificial Life: Early Generative Art at the Slade School of Fine Art’, in Gere, C., P. Brown, N. Lambert & C. Mason (Eds.), White Heat Cold Logic: British Computer Art 1960–1980, MIT Press, Leonardo Imprint

Burnham, J. (1968), Beyond Modern Sculpture, Braziller, New York

Cliff, D., I. Harvey, and P. Husbands (1993) Explorations in evolutionary robotics. Adaptive Behavior, 2:73–110.

Eiben, A.E. and J.E. Smith, Introduction to Evolutionary Computing, Springer, 2003.

Floreano, D., P. Husbands and S. Nolfi (2007) Evolutionary Robotics. In Siciliano, B. and Khatib, O. (Eds.) Springer Handbook of Robotics, Chapter 63, Springer (in press).

Husbands, P., I. Harvey, D. Cliff, and G. Miller (1997) Artificial Evolution: A New Path for AI? Brain and Cognition, 34:130–159.

Jakobi, N. (1998) Evolutionary robotics and the radical envelope of noise hypothesis. Adaptive Behavior, 6:325–368.

Kay, A., (1984), Computer Software, Scientific American, September

Kuhn, T.S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962

Lippard, L., Six Years: The Dematerialization of the Art Object from 1966 to 1972, London: Studio Vista, 1973

Mandelbrot, B. B., The Fractal Geometry of Nature, W.H. Freeman and Company, 1982

Mondada, F., E. Franzi, and P. Ienne. (1993) Mobile robot miniaturization: A tool for investigation in control algorithms. In T. Yoshikawa and F. Miyazaki (Eds.), Proceedings of the Third International Symposium on Experimental Robotics, 501–513, Springer Verlag.

Nolfi, S. and D. Floreano (2000) Evolutionary Robotics: The Biology, Intelligence, and Technology of Self-Organizing Machines. Cambridge, MA: MIT Press/Bradford Books.

Olats (2007), http://www.olats.org/schoffer/cyspe.htm

Perris, M. (2007) Evolving ecologically inspired drawing behaviours, MSc dissertation, Dept. Informatics, University of Sussex

Saunders, R. and J. S. Gero (2001) ‘Artificial Creativity: Emergent notions of creativity in artificial societies of curious agents’, in A. Dorin and J. McCormack (Eds.), Proceedings of Second Iteration, Melbourne

Shelley, M., Frankenstein, Lackington, Hughes, Harding, Mavor & Jones, London, 1818



