2 03 2009



Anthropic Entity1

Gil. & Geo. Color UnCrop.

Anthropic Entity(s) 2


Anthropic Entity(s) 3

Recent trends have made it clear that simulation model fidelity and complexity will continue to increase dramatically in the coming decades. For example, the beginning of the mission to build a simulated brain is already announced (Graham-Rowe, 2005). Using intelligent agents in simulation models is based on the idea that it is possible to represent the behavior of active entities in the world with their own operational autonomy. . . . The factors that may affect decision making of agents, such as personality, emotions, and cultural backgrounds, can also be embedded in agents. . . Abilities to make agents intelligent include anticipation, understanding, learning, and communication in natural and body language. Abilities to make agents trustworthy as well as assuring the sustainability of agent societies include being rational, responsible, and accountable. These lead to rationality, skillfulness, and morality (e.g., ethical agent, moral agent). 4

The rise of the thesis of S (W)AI5 during the 20th century is hardly surprising given a situation where the primary exemplar for "intelligence" was human sociocultural, linguistic and cognitive activity. A seminal example of this process of equivocation can be found in the opening paragraph of the document, A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, which was submitted in 1955 to the Rockefeller Foundation to request funding for a conference. The document announces, "The study will proceed on the basis of the conjecture that intelligence (which is reserved for humans) can in principle be so precisely described that a machine can be made to simulate it."6

Some 51 years later we find the computer scientists and software engineers Yilmaz, Ören and Aghaee announcing a similar objective concerning the development of "intelligent agents in simulation models . . ." Setting aside the theoretical and pragmatic difficulties, and the immense ethico-juridical and theological issues that such a project engenders, this research raises foundational questions with regard to the nature of this 'operational autonomy.'


In 1973, the theoretical physicist and cosmologist Brandon Carter presented his now famous (or infamous) paper, "Large Number Coincidences and the Anthropic Principle in Cosmology,"7 at a conference marking the 500th birthday of Nicolaus Copernicus. Ostensibly written to critique the "Large Number Hypothesis"8 proposed by the English physicist Paul Dirac, the paper in actuality also directed its critique at one of the foundational principles of modern cosmology, the Generalized Copernican Cosmological Principle.9 A modern derivative of Copernicus' argument that, relative to any given planetary body, the observable universe will be approximately the same, the 'Generalized' version asserts that from any given point the observed universe is homogeneous and isotropic in nature. Or conversely, according to the Principle of Mediocrity, there is no privileged point x such that any observation(s) made from x will be privileged over any observation(s) made from point y.

Although not the first to do so, the later work of the Princeton theoretical physicist R. H. Dicke set in motion serious questions about the sustainability of the Generalized Principle. In his "Letter to the Editor" published in the journal Nature under the title "Dirac's Cosmology and Mach's Principle," Dicke reviewed the debate between Hermann Weyl, Arthur Eddington, and Paul Dirac over the derivation of a set of extremely large dimensionless numbers with regard to specific physical and astrophysical properties of the universe. What was important about the pure numbers for the gravitational coupling constant G, the Hubble age of the universe T, and the mass of the universe M relative to its visible limits was that all had surprisingly, numerologically coincidental orders of magnitude. Why? Dicke's response was contained in that brief letter, published in a 1961 issue of Nature, where he offered10 what Carter would later name the argument from the "Anthropic Principle." Dicke noted that if there is, as Dirac argued, an underlying causal connection between the three numbers with respect to the physics at the quantum and cosmic levels, then for this argument to hold, one would have to hypothesize that this connection was "independent from time." The issue of the value of T was significant because Dirac argued that, given the evolutionary nature of the universe, it must be expected that each of the pure numbers will vary over time.
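The "coincidence" at issue can be illustrated with a back-of-the-envelope calculation. This is only a sketch with approximate constant values, and the choice of "atomic" time unit here (the light-crossing time of the classical electron radius) is one common convention among several; Dicke and Dirac worked with a slightly different battery of dimensionless numbers. The point is simply that two physically unrelated pure numbers both land in the neighborhood of 10^40.

```python
import math

# Back-of-the-envelope illustration of the "large number coincidences"
# debated by Weyl, Eddington, Dirac, and Dicke. Constants are approximate
# SI values.

e      = 1.602e-19   # elementary charge, C
eps0   = 8.854e-12   # vacuum permittivity, F/m
G      = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m_p    = 1.673e-27   # proton mass, kg
m_e    = 9.109e-31   # electron mass, kg
c      = 2.998e8     # speed of light, m/s
r_e    = 2.818e-15   # classical electron radius, m
T_univ = 4.35e17     # age of the universe, s (~13.8 billion years)

# N1: ratio of electric to gravitational attraction between a proton and
# an electron. Both forces scale as 1/r^2, so the ratio is distance-free.
N1 = (e**2 / (4 * math.pi * eps0)) / (G * m_p * m_e)

# N2: age of the universe expressed in atomic time units (r_e / c).
N2 = T_univ / (r_e / c)

print(f"N1 ~ 10^{math.log10(N1):.1f}")   # on the order of 10^39-10^40
print(f"N2 ~ 10^{math.log10(N2):.1f}")   # on the order of 10^40-10^41
```

Dirac's hypothesis took the rough equality of such numbers as causal; Dicke's counter was that T is not free to take arbitrary values once observers are required to exist.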
Dicke noted that if one were to assign a random value to T taken from a large number of possible values, then the present (actual) value would have a low "a priori probability." Consequently, the present correspondence between the numbers "would have been highly unlikely." Dicke's proposal to resolve the issue was to argue that, given the relative evolutionary age of the universe, T cannot take on a large range of values insofar as they are limited by a time-constrained, reciprocal inter-relationship between the physico-biological "requirements to make physicists,"11 and the presence of 'physicists' to carry out the required observations. As Abraham Zelmanov proposed in his dissertation, Chronometric Invariants,

The Universe has the interior we observe, because we observe the Universe in this way. It is impossible to divorce the Universe from the observer. The observable Universe depends on the observer and the observer depends on the Universe. If the contemporary physical conditions in the Universe change then the observer is changed. And vice versa, if the observer is changed then he will observe the world in another way. So the Universe he observes will be also changed. If no observers exist then the observable Universe as well does not exist.12

In 2004, at a Colloquium held at the Collège de France, Carter reflected back on his 1974 paper on "Large Number Coincidences . . ."13 He used the occasion to propose a revised definition of his original formulation of the 'Anthropic Principle' in light of the history of sometimes highly controversial interpretations of his proposal. In his 1974 paper he had provided for both "Weak" and "Strong" versions of the Principle. As he stated, "these predictions do require the use of what may be termed the anthropic principle to the effect that what we can expect to observe must be restricted to the conditions necessary for our presence as observers."14 In its 'Weak' formulation his statement asserts the trivially self-evident claim that to make any observations there has to be an observer, be that 'observer' an instrument or a 'physicist,' and of course, the presence of an observable. However, his formulation could also be interpreted as a strong version of the principle that appeared to permit far less defensible claims such as Zelmanov's, 'If no observers exist then the observable Universe as well does not exist.'

In light of these issues Carter explicitly reformulated it as a Bayesian "microanthropic" principle which asserts that "the a priori probability distribution for our own situation should be prescribed by an anthropic weighting, meaning that it should be uniformly distributed, not over space time (as the ubiquity principle would require), but over all observers sufficiently comparable with ourselves to be qualifiable as anthropic."15 The Oxford philosopher and mathematician Nick Bostrom reformulated Carter's concept of "observer(s)," which he defined as any given "observer(s) in any particular brain state (subjectively making an observation e)".16 Another of Bostrom's revisions was the strengthening of its Bayesian, probabilistic mode of statistical analysis with the addition of the "Self-Sampling Assumption" (SSA): "One should reason as if one were a random sample from the set of all observers in one's reference class."17 The SSA foregrounds the two central theoretical assumptions of the Anthropic Principle. The first is the use of Bayesian, probabilistic analysis, while the second is the epistemological concept of the 'observer(s)'. It is the second that is the most crucial component because the reference class of the concept can include any entity that functions in an observational mode, be it an instrument or a 'physicist'. The most important member of the class is, as Bostrom notes, any 'brain state (subjectively making an observation e)'. This class is the foundational premise of the Anthropic Principle – the Argument from the First Person.
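The Bayesian machinery the SSA supplies can be made concrete with a toy update. This is a hypothetical example of my own construction, not drawn from Bostrom's text: two hypotheses about the world, one containing 10 observers and one containing 1,000, each given a prior of 0.5. Under the SSA, discovering that you are, say, observer number 7 is one hundred times more likely if the small hypothesis is true, and the posterior shifts accordingly.

```python
# Toy Self-Sampling Assumption (SSA) update (hypothetical example).
# Under SSA, the likelihood of finding yourself at a particular observer
# "index" is 1/N for a hypothesis with N observers in your reference
# class, provided that index exists under the hypothesis.

def ssa_posterior(priors, observer_counts, my_index):
    """Bayes' rule with SSA likelihoods P(index | N) = 1/N if index <= N."""
    likelihoods = [1.0 / n if my_index <= n else 0.0 for n in observer_counts]
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# H_small: 10 observers; H_large: 1,000 observers. Equal priors.
post = ssa_posterior(priors=[0.5, 0.5], observer_counts=[10, 1000], my_index=7)
print(post)  # the small-world hypothesis dominates the posterior
```

The worked number is exactly the kind of "anthropic weighting" Carter describes: the evidence "I am this observer" is treated as a random draw from the reference class, and hypotheses with fewer observers are boosted.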


In the same 1974 paper Carter, as a concluding codicil to his formulation of the Strong Principle, paraphrased Descartes' assertion with respect to his attempt to frame an indefeasible ground for the justification of knowledge claims as "cogito ergo mundus talis est" ('I think, therefore the world is such as it is').18

For Descartes, indefeasible epistemological claims must arise from the methodic, universal application of a hyperbolic, skeptical interrogation of the contents of, and relation between, introspective, self-reflective consciousness (res cogitans) and sensate experience of the body and the external world (res extensa). Though the application of hyperbolic skepticism can call into question the contents of all introspective and sensate experience, the very activity of the cogito is its own guarantee, since to deny the presence of the act of thinking would be to generate a logically contradictory claim. As Descartes notes, "So that it must, in fine, be maintained, all things being maturely and carefully considered, that this proposition (pronunciatum) I am, I exist, is necessarily true each time it is expressed by me, or conceived in my mind."19

The core of Descartes' project was initiated in the great section of the 'First Meditation' where he laid out the sceptical arguments concerning 'dreaming' and non-dreaming states and, of course, the ultimate, unbounded argument from the 'Evil Genius,' or the Argument from Theocentric Stability.20 Founded on the Thesis of Similarity with respect to the traditional Platonic argument from "epistemically distinct worlds,"21 Descartes concluded that the 'Evil Genius' argument was false because the sensate world, and its relation to the methodic process of thinking, are grounded in a theodicy of the possibility of error and certainty insofar as "it follows that existence is inseparable from him, and therefore that he really exists: not that this is brought about by my thought, or that it imposes any necessity on things, but on the contrary, . . . the necessity of the existence of God, determines me to think in this way . . .".22

Consequently, at the center of the First Person (Anthropic) experience is the perpetual presence of deconstructibility – of systemic 'doubt' and 'error.' This is the case because if there is no ground other than the activity of the cogito, then any first-order knowledge claim(s), and consequent derivations drawn from these initial claim(s), must themselves be subject to the second-order constraint of the very activity of the cogito – the presumably untranscendable limiting condition on all first-order experiential activity. As Bostrom notes, "Observation selection effects are an especially subtle kind of selection effect that are introduced not by limitations in our measurement apparatuses but by the fact that all evidence is preconditioned on the existence of an observer to 'have' the evidence and to build the instruments in the first place."23

To understand the problem of 'error' and systemic doubt is to understand that Descartes confronts two correlative epistemological issues – 'naive realism' and the problem of first-person to third-person data. These two issues are structurally interrelated insofar as the success of resolving the latter is derivative of our success in theoretically exposing the naivety of humanity's everyday sense of reality with regard to consciousness' interaction with the world. One might call this 'naiveness' the taken-for-granted nature of phenomenal experientiality, in that what "I," or "we," experience is what we experience. The ultimate theoretic issues are, therefore, first, whether one can autonomize folk psychological understandings of self-consciousness from this 'naive realism,' and second, if one is to successfully argue that one can, whether that success is contingent upon the further success of the attempt to reground the argument for the presence of an autonomized self-consciousness in third-person data. This is precisely what Descartes attempted to do with his 'Evil Genius' argument. The Argument from Theocentric Stability is, historically, the first example of Carter's Strong Anthropic Principle, and it represents the essential realization that one must establish a theoretic relation between first- and third-person data.

Famously, Descartes' realization also represented the strong – metaphysically dualist – principle of the irreducibility of the contents of mind, inasmuch as mind cannot be framed in terms of theoretic (machinic-physicalistic) explanations; or conversely, the non-extensibility thesis: that theoretic explanations cannot be extended to 'consciousness' (res cogitans) inasmuch as they are restricted to the domain of nature (res extensa).24 In the contemporary setting of 20th-century philosophy of mind the Australian philosopher David Chalmers remarked, with regard to the non-extensibility thesis, that there is an "explanatory gap . . . between the functions and experience."

This remark occurs in his 1995 paper, "Facing Up to the Problem of Consciousness," where he confronts the question of the nature of 'qualia' (the experientiality of consciousness) – the sense that, when we are given explanations in terms of the physics of light, of neurobiology, of the constructionist mediation of discursive systems, we are still left with the mystery of our experience of the absolutely seamless, immersive sense of being in the world – "Why is it that when electromagnetic waveforms impinge on a retina and are discriminated and categorized by a visual system, this discrimination and categorization is experienced as a sensation of (the qualia of) vivid red? . . . There is an explanatory gap (a term due to Levine 1983) between the functions and experience, . . ."25


In 2003 the young Swedish philosopher Nick Bostrom published the paper, "Are You Living in a Computer Simulation?"26 In his conclusion he notes, "Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one."27
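The arithmetic behind the trilemma can be sketched in a few lines. This is a simplified reading of the fraction at the core of the paper, with variable names and illustrative numbers of my own choosing: if f_p is the fraction of civilizations that reach a posthuman stage, f_i the fraction of those interested in running ancestor-simulations, and N the average number of simulations each interested civilization runs, then the fraction of human-like observers who are simulated is roughly f_p·f_i·N / (f_p·f_i·N + 1), which is driven toward one as soon as the product in the numerator is large.

```python
# Sketch of the fraction at the heart of the simulation argument.
# Simplifying assumption for illustration: each civilization's "real"
# history contains one population of human-like observers, and each
# ancestor-simulation contains another of comparable size.

def simulated_fraction(f_p, f_i, n_sims):
    """Fraction of human-like observers living in a simulation."""
    x = f_p * f_i * n_sims
    return x / (x + 1)

# If even 0.1% of civilizations become posthuman, 0.1% of those run
# simulations, and each runs a billion, most observers are simulated.
print(simulated_fraction(0.001, 0.001, 1_000_000_000))

# If nobody ever reaches that stage (propositions 1 or 2 of the
# trilemma), the fraction collapses to zero.
print(simulated_fraction(0.0, 0.5, 1_000_000_000))
```

The trilemma simply observes that for this fraction to be far from one, one of the factors in the product must be driven nearly to zero.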

It goes without saying that much academic ink has been spilt over Bostrom's paper, especially given its science-fiction-like quality. The paper itself is part of a larger debate concerning 'Doomsday,' or human extinction, scenarios that have their roots in Carter's work, in particular his 1983 paper, "The Anthropic Principle and its Implications for Biological Evolution." I will not rehearse Bostrom's argument, nor necessarily agree or disagree with the paper's conclusions; rather, I will remark on two unmentioned reasons that I suggest underwrite some of the resistance to the very idea that human beings could, in actuality, be simulatable, if we define simulation as our development of artificial entity(s) that operationally achieve sociocognitive parity with humans. It is the threat that such a possibility poses to two of the most generic, identitarian properties of personhood. The first is the belief in our substantive, autonomous identity as persons possessing the property of being the person whom we experience as "me" or "I." The second is 'our' autocentric relationship to other forms of living, and emergent, entities.


Imagine a given person x, negotiating the contingencies of a posteriori experience – what Alan Turing called the sheer 'informality of behaviour'. That is, they live one day out of all the days that constitute their life; a day being normatively defined as a 24-hour day starting at 12:01:01 AM and ending at 11:59:59 PM. More formally, they have to negotiate a given set B ⊆ (A × A), where B is the subset of all possible decision scenarios that must be made on, for example, Thursday, March 27, 2007, and where (A × A) represents the totality of all possible implemented and unimplemented decisions. Furthermore, whatever decisions are decided and acted upon by x, these decisions will be a function of, one, intentionalized and non-intentionalized (unconscious) mental processes and, two, third-person, contingent events occurring during that Thursday.28 Therefore, given that (A × A) delineates N possible epistemic alternatives, B represents an enormous, though finite, set of alternatives.
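The set-theoretic formalism can be sketched concretely. This is a toy illustration with an invented, tiny action set; the decision spaces of an actual day are of course vastly larger, though still finite: A is a set of possible actions, the Cartesian product A × A models pairs of implemented and unimplemented decisions, and B is whatever subset of those pairs a given Thursday actually puts in play.

```python
# Toy illustration of the decision-set formalism: B is a subset of (A x A).
# A is a hypothetical, tiny set of possible actions.
from itertools import product

A = {"take the bus", "walk", "answer email", "defer email"}

# (A x A): every ordered pair of a possible implemented decision and a
# possible unimplemented one.
A_cross_A = set(product(A, A))
print(len(A_cross_A))  # |A|^2 = 16 pairs for a 4-element A

# B: the pairs actually at stake on a given day; here, pairs whose two
# actions differ (one cannot both implement and not implement one act).
B = {(x, y) for (x, y) in A_cross_A if x != y}
print(len(B))  # 12

assert B <= A_cross_A  # B is a subset of (A x A), as the formalism requires
```

Even this four-action toy shows the quadratic growth of the pair space; with realistically sized action sets the "enormous, though finite" character of B follows immediately.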

We certainly have at our disposal a number of meta-analytical frameworks to select from as we attempt to understand how x negotiates the day of Thursday, March 27, 2007. The late Michel Foucault, for example, would have appealed to the a priori, interlocking constraints of discursive 'formations,' 'enunciative fields' and the ever-present movement of power relations – the 'micro-physics of power' – all of which are constitutive of the histories of everyday life. What certainly would not be present is any appeal to the traditional epistemic descriptions of phenomenological experientiality, empiricist theories of referentiality, or the naive realism of the folk psychological categories of 'personality,' 'emotions' and 'learning' deployed by Yilmaz, Ören and Aghaee. Furthermore, in Foucault's case, the usage by each person of the term 'I' has no function other than its indexical, enunciative role as nominative and objective personal pronouns spread out on the plane of discursivity and power relations. Notwithstanding its genuine explanatory potential, a theory like Foucault's cannot satisfactorily take into account a situation in which every human person has to negotiate the a posteriori modalities of daily life, insofar as the epistemic negotiability of the everyday existence of x represents a teleologically constrained, epistemic process em(bodied) in the neuro-phenomenological, sociolinguistic formation of x. Furthermore, Foucault certainly would not have accepted the commensurable position that human discursive, socio-political, and quotidian history are embedded in, and constrained by, the evolutionary history of a specific physiological, neurobiological, and cognating entity (Anthropic) named the 'human being.'

Notwithstanding these points of dispute concerning positions like Foucault’s, there is a significant point of intersection between his nominalist critique of the Cartesian-Kantian, autocentric tradition of the transcendental ‘I,’ and, one of the single most interesting neuro-philosophical bodies of work to emerge in recent years on the question of consciousness and self-consciousness. Thomas Metzinger, Director of the Theoretical Philosophy Group at the Department of Philosophy of the Johannes Gutenberg-Universität, Mainz, remarked, “The ‘phenomenal first-person perspective’ is one of the most fascinating natural phenomena we know, and in a certain sense we are this phenomenon ourselves,”29

In describing consciousness as a "natural phenomenon," care must be taken in how we are to interpret this phrase, because it is quite easy to infer a folk psychological reading that references an experience to which we attribute existential substantiveness. Like Foucault, Metzinger denies that any such "things" indexed by the personal pronouns "I," "you," "we," "she," or "him" have any ontological status. As the title of his monograph makes clear, "we" are "No One."30 He describes it theoretically as the PSM – the "Phenomenal Self-Model" – a "virtual" "simulation" or "avatar" that is systemically generated at the operative level of the neurobiological, neurocomputational, and functionalist activity of the brain. As a "pre-reflective," evolutionary development, this "episodically active representational entity" permits 'us' to phenomenally experience the world as a non-representational, immersively transparent experience.31 As such, it generates an illusionary (virtual) experience of a "singular, temporally extended experiential self" functioning in the world.32

Metzinger argues that we can never convert the act of self-reflexive introspection into a meta-reflective act of self-reflection on first-person consciousness insofar as its nature is "nonconceptual," "subdoxastic" and "phenomenally transparent." Consequently, 'we' are, by definition, always fundamentally in error with regard to some aspect of our(selves) and the world. Metzinger states, "What makes a phenomenal representation transparent is the attentional unavailability of earlier processing stages in the brain for introspection. The instruments of representation themselves cannot be represented as such, and hence the system making the experience, on this level and by conceptual necessity, is entangled in a naïve realism: In standard configurations, one's phenomenal experience has an untranscendably realistic character."33

Setting aside the issue of whether Metzinger's arguments are sustainable as philosophical proposals, his work is now playing a major theoretical role in the "Cronos Project," funded by the Engineering and Physical Sciences Research Council (UK). Headed by Professors Owen Holland and Tom Troscianko, the project's "adventurous" research objective is the attempt to develop the first fully conscious, machinic entity. The question as to whether this project will fail or not is, for me, utterly irrelevant, since there is, I would argue, a far more profound issue that we all may have to face. The history of AI research is riddled with repeated failure which, as a consequence, represents one of a number of contributing factors that have fostered the continued presence of an autocentric belief in our own uniqueness. Granting the fact that from an evolutionary standpoint 'we' are extremely unique, this fact cannot then be used to support the stronger claim that 'we' are absolutely unique.

As Carter realized, the Generalized Copernican Cosmological Principle was having a deleterious effect on cosmological research; hence his counter-proposal of the Anthropic Principle. It is important to understand that this 'Principle' did not sideline the Principle of Mediocrity. The point of the Anthropic Principle is that it was not proposed to establish our uniqueness, but rather to note that, as a particular type of evolutionary, Anthropic entity, we must take into account the 'selection effects' that being human has on research. One interpretation of the Mediocrity Principle is that there is no privileged Anthropic entity x, such that x can be privileged over Anthropic entity y. I would strongly suggest that what we will, in the near future, have to critically confront is the emergence of new forms of Anthropic (living) entities, be they robotic or, as Yilmaz, Ören and Aghaee note, simulable 'agent societies (that) include being rational, responsible . . . accountable . . . trustworthy . . . an ethical agent, moral agent).' This conclusion, and warning, was outlined in the 2007 report, "Roboethics Roadmap," issued by the European Robotics Research Network.


1 McKeon, Matt and Susan Wyche. (2004). "Morgan MacDonald's SL Profile," in Life Across Boundaries: Design, Identity, and Gender in SL, Georgia Institute of Technology, p. 30.

2 Gilbert and George. (2007). Gilbert & George Exhibition. Photo: Jo Loosemore.

3 Cronos Robot. Cronos Project: Machine Consciousness Lab, University of Bristol

4 Yilmaz, Levent, Tuncer Ören and Nasser Ghasem-Aghaee. (2006). "Intelligent Agents, Simulation, and Gaming," in Simulation & Gaming, Vol. 37, No. 3, pp. 339-349. pp. 339, 342.

5 S (W)AI, or the principles of Strong & Weak Artificial Intelligence, were first proposed by the American philosopher John Searle to distinguish between Weak AI and Strong AI (SAI), which, in his seminal 1980 paper "Minds, Brains, and Programs," he described as follows: "according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind."

6 McCarthy, J., M. L. Minsky, N. Rochester, & C. E. Shannon. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. John McCarthy, Department of Mathematics, Dartmouth College, Hanover, New Hampshire. p. 1. This was also the first time that the phrase "Artificial Intelligence" appeared in print. However, neither McCarthy nor Alan Turing were responsible for the birth of the neural computational thesis. It was first presented in McCulloch, W. S. & Pitts, W. (1943). "A logical calculus of the ideas immanent in nervous activity." Bull. Math. Biophys. 5, 115-133.

7 Carter, Brandon. (1974). "Large Number Coincidences and the Anthropic Principle in Cosmology," in M. S. Longair, ed., Confrontation of Cosmological Theory with Astronomical Data (Dordrecht: Reidel), pp. 291-298.

8 Dirac, Paul A. M. (1938). "A New Basis for Cosmology," in Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, Vol. 165, Issue 921, pp. 199-208. As Dirac stated, "The Large Numbers hypothesis asserts that all the large dimensionless numbers occurring in Nature are connected with the present epoch, expressed in atomic units, and thus vary with time. It requires that the gravitational constant G shall vary, and that there shall be continuous creation of matter. The consistent following out of the hypothesis leads to the possibility of only two cosmological models. One of them, which occurs if one assumes that the continuous creation is a multiplication of existing matter, is Einstein's cylindrical closed Universe. The other, which occurs if one assumes the continuous creation takes place uniformly through the whole of space, involves an approximately flat Minkowski space with a point of origin where the Big Bang occurred."

9 Rudnicki, Konrad. (1995). The Cosmological Principles. Jagiellonian University, Cracow, Poland, p.86.

10 The Russian physicist and mathematician Abraham Zelmanov was the first to propose the argument from anthropic experience. See, Rabounski, Dmitri. (2006). "Zelmanov's Anthropic Principle and the Infinite Relativity Principle," in Progress in Physics, Vol. 1, pp. 35-37.

11 Dicke, R. H. (1961). "Dirac's Cosmology and Mach's Principle," in Letters to the Editor, Nature, Vol. 192, pp. 440-441. p. 440.

12 Zelmanov A. L. (1944). Chronometric Invariants. Dissertation. First published: CERN, EXT-2004-117, 236 pages.

13 Carter, Brandon. (2004). "Cosmology: Facts and Problems," Colloquium, Collège de France.

14 Op. Cit., p. 291.

15 There is little doubt that Carter's Bayesian formulation was drawn from Bostrom's work, given Carter's citation of it.

16 Bostrom, Nick. (2005). "Self Location and Selection Effects: An Advanced Introduction," Faculty of Philosophy, Oxford University.

17 Bostrom, Nick. (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. New York, N.Y.: Routledge. p. 57.

18 Op. Cit., p. 294.

19 Descartes, René. Meditations on First Philosophy. John Veitch translation of 1901. Meditation II, 3.

20 Michael Hanby notes that "Descartes' fundamental principle negates the traditional God only to reconstruct him as a causal hypothesis and guarantor of clear and distinct ideas." Hanby, Michael. (2003). Augustine and Modernity. London: Routledge, p. 270.

21 Newman, Lex. (2005). "Descartes' Epistemology," Stanford Encyclopedia of Philosophy.

22 Op. Cit., Meditations, 5, 10.

23 Op. Cit., Bostrom.

24 An extremely influential variant of this tradition is represented by the work of the late Michel Foucault. The major difference is, of course, Foucault's rather behaviorist principle of 'exteriority,' which jettisons the Cartesian theory of mind. As is well known, Foucault argues that we do not have a 'mind' at all.

25 Chalmers, David. (1995). "Facing Up to the Problem of Consciousness," in Journal of Consciousness Studies 2 (3), pp. 200-219. p. 205. One of the central texts on the question of consciousness is Chalmers' 1996 monograph, The Conscious Mind: In Search of a Fundamental Theory.

26 Bostrom, Nick. (2003). "Are You Living in a Computer Simulation?" in Philosophical Quarterly, Vol. 53, No. 211, pp. 243-255.

27 Ibid. Bostrom, p. 11.

28 For example, tripping over an ant. No matter how remote, or absurd, the possibility of someone actually tripping over an 'ant' is, this scenario is logically possible. One cannot, in the face of the principle of contingency, assert unconditionally, a priori, what will, or will not, occur on that Thursday.

29 Metzinger, T. (2004). "The subjectivity of subjective experience: A representationalist analysis of the first-person perspective," in Metzinger 2000a. Revised version in Networks, 3-4: 33-64.

30 Metzinger, T. (2004). Being No One: The Self-Model Theory of Subjectivity. Cambridge, MA: MIT Press.

31 Metzinger relies, amongst many others, here on the work of Antti Revonsuo, Head of The Consciousness Research Group at the University of Turku, Finland.

32 The distinction between consciousness and self-consciousness is essential to Metzinger's theory. Phenomenal experientiality is comprised of immersive consciousness (PSM) and self-consciousness as directed intentionality, what he refers to as the "Phenomenal Model of the Intentionality Relation" (PMIR).

33 Ibid. Metzinger, p.



