Grieving

Grieving is an active process. Today it’s usually encapsulated by Elisabeth Kübler-Ross and her five stages: Denial, Anger, Bargaining, Depression and Acceptance. I should say that this process applies as much to the ending of a relationship as it does to death.

I tend to feel that Kübler-Ross’s model is a little simplistic. I prefer the Growing Around Grief theory, which challenges the idea that our grief goes away or gets smaller over time. This theory suggests that our grief does not shrink; instead, we grow around it.

It’s also worth remembering that in the past, grieving was done in public. A person who had lost someone wore mourning clothes, or sometimes just a black armband or other signifier. They withdrew from society. Widows were expected to mourn for two years and were allowed to wear grey or lavender only in the last six months of ‘half-mourning’. This was a message to society that you were suffering, and that others should respect your position, as well as your varied emotional states (or outbursts).

But this was an eccentric Victorian habit, and in the new ‘liberal’ world of the 20th and 21st centuries we decided that such formality was somehow unhealthy. Much in the same way, we also thrust death into the back-pocket of consciousness. We created an industry of therapeutic management, rather than a social acceptance of loss. With all that came an unspoken intolerance of public pain. Losing someone was something the individual had to get over with friends and family, but not show their despair in public. ‘Grief fatigue’ became an acceptable situation for those having to listen to someone’s mortal hurt. Keep it in the dark and inside your home was the modern thing. Don’t embarrass anyone in public, and most importantly, don’t let it go on too long.

I’ve started realising recently though that the Victorian grief model had merit. They formally ritualised death, separation, and the aftermath, so that public consideration for the exceptional status of loss was possible. Society knew what was happening to you. You didn’t need to constantly explain why you weren’t your normal self. It was advertised by your dress and your demeanour. Understood by all as a ‘protected’ state within the social order.

Victorian grieving may seem sentimental, morbid (or even hypocritical) to the modern mind, but I feel it did help in a counter-intuitive manner.

Don’t laugh, but if I could, I’d wear black and take to the veil. Yet I’d only look eccentric in today’s less demonstrative world. Some would say I was just overreacting or hysterically seeking attention.

Crazy, eh?

Why must you hurt me…

“Why must you hurt me, when I love you so? When I can do nothing else nor want to, for love made me and fed me and kept me in better days? Why will you cut me, and disfigure my face, and fill me with woe? I have only loved you for your beauty as you once loved me for mine in the days before the world moved on. Now you scar me with nails and put burning drops of quicksilver in my nose; you have set the animals on me, so you have, and they have eaten of my softest parts. Around me the can-toi gather and there’s no peace from their laughter.

Yet still I love you and would serve you and even bring the magic again, if you would allow me, for that is how my heart was cast when I rose from the Prim. And once I was strong as well as beautiful, but now my strength is almost gone. If torture were to stop now, I might still recover – if never my looks, then at least my strength and my kes.

But another week… or maybe five days… or even three… and it will be too late. Even if the torture stops, I’ll die. And you’ll die too, for when love leaves the world, hearts are still. Tell them of my love and tell them of my pain and tell them of my hope, which still lives. For this is all I have and all I am and all I ask.”

― Stephen King, The Dark Tower

Our very own totalitarianism

It is a mistake to think that totalitarianism simply means brutal dictatorships such as the old Stalinist Soviet Union, Nazi Germany, or present-day North Korea.

Totalitarianism is any political regime where thought is restricted to acceptable forms of discourse and action. Acceptable, that is, to the regime in power at the time. This does not need to be done by propaganda and secret police brutality. One needs no gulags or mass shootings. Such processes are, essentially, primitive, since they are so observably present. They rely on mass hysteria and fear, which is difficult to maintain long-term.

Totalitarianism’s key feature, however, is to introduce ways of thinking that disallow contradictory argument without needing an obvious state presence or any clear coercion. It operates by subtle media manipulation, using the old Goebbels-style concepts of sloganeering, repetition, ridiculing opposition, ‘big’ lies, and creating political Utopian fantasies. Thus, those subject to totalitarian systems believe themselves to be free, even when they are not. They will also defend the regime in order to sustain their supposed freedom (i.e. freedom to think as the government thinks).

It is very difficult to combat a social order where thinking differently (and I mean THINKING, not emoting or fantasising) is seen as insane, antisocial, or subversive.

Governments that want to control thinking do so by establishing a discourse throughout the media that seems ‘common sense’, even though the rationale for such ‘sense’ very often lacks any evidence.

Similarly, unwelcome words are driven out of use, and others substituted to promote the ‘obviousness’ of a sociopolitical position. Practical democracy is managed so as to create narrow limits to voting styles that discount varied action.

By these criteria then, the UK currently has a totalitarian government. Not in origins perhaps, but certainly in its style and intents. You can make your own list of other regimes.

The Shape of the Universe in Brief

Contrary to what you might think, this topic is as much about philosophy as it is science. That it is deeply linked to such ideas as quantum mechanics and relativity does not diminish the key observation that needs to be made: that we are discussing the nature of Reality here, something that has engaged philosophers and scientists for as long as there has been thought itself. All of our belief systems, considerations of identity, societal rules and obligations, and sense of morality rely on a consistent sense of reality, which is the foundation of all other superstructural institutions. What we are as human beings is MADE by our sense of what is ‘real’, and where the divisions between the real and unreal lie.

To ask about the Universe is akin to goldfish asking about the goldfish bowl. Does the bowl have boundaries? If not, could it possibly be infinite? How did ‘the Bowl’ come into being? How come I, the Fish that I am, end up being in this Bowl and not some other? What am I supposed to do about being a bowl-ridden fish? Was the Bowl created? If not, then how is it here at all? Now substitute ‘person’ for ‘fish’ and ‘Universe’ for ‘Bowl’ and you may get the idea of how important such thought explorations are.

To make matters more interesting, human beings live in two ‘Worlds’: The Phenomenal World, and The Existential World. The Phenomenal World is that of our day-to-day experiences. How our World looks to us, feels to us, and how we relate to its obvious nature as presented to us by our senses. Yes, the Phenomenal World is ‘flat’! Why? Because that’s how it presents itself to us every day. I walk across a flat surface. You say it is curved? But only at certain global scales. Day to day I live with it being flat. The Phenomenal World has a habit of overriding the Existential World, which is the uncertain and scarily counter-intuitive world of logic. I say logic and not maths or science, because the latter both rely on logic for their validity. The Universe has to be consistently logical (even if it sometimes seems nonsensical) for us to know anything at all. And logic needs to pervade the Universe. There cannot be logical anomalies, otherwise any Laws of Nature we could understand would be particular only to this region of space and not everywhere. Whatever ‘everywhere’ might mean.

So what is The Universe? The fishbowl analogy breaks down when we ask this question. The Universe is reality itself. Its ‘laws’ give us a lever to understand reality. It was discovered (by Edwin Hubble), for instance, that all objects in the Universe are flying away from each other at great speed. This means that the Universe is expanding. One major consequence comes from this discovery: the Universe must have been smaller in the distant past. Moreover, this expansion is accelerating over time. It is space-time itself that is expanding. Think of space and time as one thing, essentially the same: you can’t have space without time (and vice versa). Then think of space-time as the ‘dough’ of a Universe-cake spotted with ‘raisins’ (stars, galaxies, worlds…): it is the ‘dough’ that is expanding as the cake ‘cooks’, leading to the space between each ‘raisin’ growing.
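The ‘raisin dough’ picture can be put into numbers with Hubble’s law, which says a galaxy’s recession velocity is proportional to its distance from us. A minimal sketch, assuming an illustrative (approximate) value for the Hubble constant of 70 km/s per megaparsec:

```python
# Hubble's law: recession velocity v = H0 * d.
# H0 ~ 70 km/s per megaparsec is an approximate, illustrative value.
H0 = 70.0  # km/s per Mpc

def recession_velocity(distance_mpc: float) -> float:
    """Speed (km/s) at which a galaxy at the given distance recedes from us."""
    return H0 * distance_mpc

# Every 'raisin' sees every other raisin receding, faster the further away:
for d in (10, 100, 1000):  # distances in megaparsecs
    print(f"A galaxy {d} Mpc away recedes at about {recession_velocity(d):.0f} km/s")
```

The proportionality is the key point: it is exactly what you’d expect if the ‘dough’ between the raisins is stretching everywhere at once, rather than the raisins flying through static space.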

If the Universe was smaller in the past, then how much smaller? Calculations give a start point for the Universe at about 14 billion years ago. That’s fourteen with nine zeroes. At that point, the Universe was very, very hot, and very, very dense. Some cosmologists speculate that it was a ‘singularity’, that is to say, a point of infinite energy and mass. However, this speculation has its critics.

If this is true, then logically the rules of quantum mechanics were in effect all those billions of years ago. Quantum mechanics deals with the behaviour of minuscule objects, like singularities. It is a very precise and well-tested field of science, showing that at a fundamental level small objects behave totally differently from the way our Phenomenal World experiences would suggest. They can pop in and out of existence. They can jump from one place to another without seemingly moving. Furthermore, they can be particles and waves at the same time. And we are all made from these objects.

Hence, it is possible that the Universe just popped into existence unbidden. Much though this may offend both religious and common-sense belief, it is the logical conclusion drawn from following a chain of scientific reasoning (based on currently available sound evidence).

But what of the future?

How the Universe evolves will depend on one thing: how much mass is contained within it. Mass is simply a measurement of substance. Don’t confuse mass with weight, even though everyday speech blurs the two. Mass is constant, but weight varies depending upon where you happen to be. For example, a woman with a mass of 72 kilos weighs the usual amount on Earth, but on the Moon (where gravity is about one-sixth of Earth’s) she would weigh the equivalent of only about 12 kilos, while her mass stays the same at 72 kilos.
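The distinction comes down to a line of arithmetic: weight is mass times the local gravitational acceleration, so only the weight changes from place to place. A small sketch (the surface-gravity figures are standard approximations):

```python
# Weight = mass * local gravitational acceleration. Mass itself never changes.
EARTH_G = 9.81  # m/s^2
MOON_G = 1.62   # m/s^2, roughly one-sixth of Earth's

def weight_newtons(mass_kg: float, g: float) -> float:
    """Weight (in newtons) of a given mass under gravitational acceleration g."""
    return mass_kg * g

mass = 72.0  # kg -- the same on Earth, the Moon, or anywhere else
print(f"On Earth:    {weight_newtons(mass, EARTH_G):.0f} N")
print(f"On the Moon: {weight_newtons(mass, MOON_G):.0f} N")
# The 'Moon weight' expressed in Earth-equivalent kilos comes out near 12:
print(f"Earth-equivalent on the Moon: {mass * MOON_G / EARTH_G:.1f} kg")
```

Strictly, weight is measured in newtons and mass in kilograms; the ‘kilos on the Moon’ figure above is just the Moon weight converted back into what an Earth bathroom scale would read.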

Albert Einstein also discovered that mass ‘distorts’ space-time (in his General Theory of Relativity). So, the curvature of space will tend towards a sphere depending on how much mass there is within it. Hence, if the Universe contained enough mass to be spatially ‘spherical’ (closed), its expansion would eventually slow, halt, and reverse.

But how much mass is there in the Universe? Not enough to cause any spatial curvature that is measurable. The Universe is, hence, essentially ‘flat’. This means that the accelerating expansion that I mentioned earlier will never be constrained. There will never be any contraction or stasis. The Universe will go on expanding… forever.

It will go dark. Stars and planets, which seem so permanent to us within our Phenomenal World, have but a limited life span. One day they will all end. As the last stars die, as the planets crumble away, and when all the Black Holes evaporate (as Stephen Hawking predicted they will), then in the very far distant future all there will be is blackness in all directions. Nothing to see. No orientation. Total emptiness.

If humanity survives that long, then it is certain that our Phenomenal World will have to cope with these changes. It seems to me then that we, for the sake of our future generations, should get used to the Existential World’s logic of the Universe. Its utter queerness and contradictions. Its uncertainties and mysteries. Because it is within this very understanding that our futures lie.

Original text written by Bea Groves. March 2024 (this is a slightly revised version)

Communicating about Communication

Otherwise known as Principles of Communication in Disability. Or something.

There is, of course, an irony in writing about communication. The very issue brings forth nuances that an essay doesn’t really satisfy. But I’m stuck with it. Best idea is to book me to talk further on the topic.

It may also seem that this is a purely theoretical discussion, but I must ask you to bear with me as I elaborate some aspects of communication that I feel have implications for understanding, identity and learning. It is a useful discussion for both disabled and transgender individuals.

Problems

The first big fault is to think that communication concerns the transference of information from one person to another. This is a very common popular assertion.

It assumes that the primary requirement of any communication is the clear and purposeful giving of internalised meanings to another person. I shall discuss the issues with internalisation later, but for the moment let’s deal with this illusion. The reason why this belief is so commonplace is because of the phenomenon of communicating in itself. How it obviously appears to us. We, as individuals, experience thoughts (notice I use the term ‘experience’ and not ‘have’) which we elaborate to others through (at minimum) verbal communication. We, erroneously as it happens, conceive of our linguistic exploits as being a direct representation of what we think. These enter the ears (or not) of other people and somehow are understood and assimilated.

Shannon and Weaver

But research indicates otherwise. In 1948 Claude Shannon published a now famous paper which, republished in 1949 with a commentary by Warren Weaver, introduced their Model of Communication. The Shannon/Weaver model is now a classic of communication studies, but was originally directed more towards machine processing in radio/TV and early computing than to speech. Here’s the diagram that illustrates its main concepts:

Diagram showing the Shannon and Weaver communication model

In this case, two entities are communicating. One is a transmitter, the other a receiver. Though the diagram doesn’t show it, either end of the communication system could be transposed. In effect, communication here is always two-way and simultaneous.

In this simple case, the transmitter wishes to send a message. In order to do so they encode information in some way (NB!), and then send this coded data via a channel (e.g. morse code over radio). During the transmission process, the message is subject to noise, which could be radio static, perhaps. At the other end it is accepted and decoded (NB!), and hopefully understood. But should the noise be too great, or transmission fails in some way, feedback is sent to ask for retransmission or clarification.
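The transmit → noise → decode → feedback loop can be sketched as a toy simulation. Everything here (the corruption model, the retry loop, the function names) is my illustrative invention, not Shannon and Weaver’s mathematics, but it shows the shape of the process:

```python
import random

def noisy_channel(message: str, noise_level: float, rng: random.Random) -> str:
    """Corrupt each character with probability noise_level (the 'static')."""
    return "".join("#" if rng.random() < noise_level else ch for ch in message)

def transmit(message: str, noise_level: float, max_retries: int = 10, seed: int = 0) -> int:
    """Send until the message arrives intact; return the number of attempts used.

    The receiver 'decodes' by checking for corruption and, via the feedback
    channel, asks the transmitter to resend -- the retransmission loop in
    the Shannon/Weaver diagram.
    """
    rng = random.Random(seed)
    for attempt in range(1, max_retries + 1):
        received = noisy_channel(message, noise_level, rng)
        if received == message:      # decoded cleanly: no feedback needed
            return attempt
    return max_retries               # noise too great; transmission abandoned

print(transmit("HELLO OVER", noise_level=0.2))
```

With zero noise the message gets through first time; as the noise level rises, the feedback loop has to do more and more work, until at some point communication fails altogether. Human conversation, as discussed below, behaves much the same way.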

So far, so clear (I hope?!).

But what became obvious to sociologists and social psychologists was that Shannon and Weaver’s concepts were just as applicable to human interactions as to machines. This becomes plain when it is noticed that there is a ‘sub-level’ of encoding/decoding happening within the transmitter and receiver themselves.

So what is ‘encoding’ for human beings? Well, this is the multiplicity of usage of simultaneous channels that we habitually (and naturally) use in our discourse. By channels, I mean speech, languages and their sub-types, non-discursive vocalisations (‘Ermm’, ‘Oooh’, etc), bodily movements/signals, our deliberative appearance (how we represent ourselves via our dress…) etc. All of these overlap to form an encoded message that is received by another person or persons.

The encoding IS the message. It is subject to noise: stereotyping, prejudice, assumption, and emotional conditions within the situation itself. Such noise infiltrates the message and changes it. Hence Marshall McLuhan’s famous phrase “the medium is the message” (Understanding Media, 1964), wherein it’s not just what is said, but also how, why, where and under what conditions a transmission is understood. Moreover, any meaning that can be gleaned from a message lies in the encoding/decoding process itself. Meaning, hence, is not ‘in’ the person, but only within the message management process.

One might wonder how human beings ever understand one another, considering how subtle this process is. It’s a miracle that we ever manage to gain any clarity, let alone the intimacy that communication affords us.

Social regulation

Which, of course, raises an interesting question: how do communicative processes maintain any clarity at all, when the range of possible distortion through social ‘noise’ and communicative limitations (not sharing a common language, deafness, being unable to speak, limited bodily movement, etc.) makes them so problematic? Let alone the commonplace fears of difference that haunt us, and may cause disruptions in the interpretation of even simple communicative signals.

The regulatory process in communication (if one can call it that) is a social matter that attempts to ‘make straight’ how communication works within historic cultures and their structural consequents (e.g. formalised national and geographic identities).

We create dictionaries (a history of word-meaning), teach grammar and syntax to our children, but most of all form common styles of channel formation on a day-to-day basis. As we use channels, receive effective feedback, and clarify noise, we create forms of communication that are culturally significant enough to be ‘standardised’. Attempts are made to reduce noise (using redundant channels – e.g. by talking, showing, and writing a message at the same time) and to eliminate message prejudices or assumptions by pressing for accepted forms of speech.

This is well known to disabled people, where there is a constant battle over meanings within our encoded transmissions. How do we use the word ‘disabled’? What is the connection between the word and the people who use it? The word ‘disability’ can be destructive or constructive. There is nothing inherently and fundamentally meaningful in the word itself, but only in its practical usage as part of social encoding.

Consider also that the accretion of cultural usage establishes linguistic meaning over centuries of human experience. This is very difficult to counteract. It’s not surprising, then, that even if two people supposedly speak the same language (e.g. British and American English), communicative noise still plagues us. To talk to another person about disability implies very careful encoding (picking of words) and reduction of potential noise (avoidance of emotional stress points, and clarification not just of WHAT a word means, but WHY it is used in such and such a manner…). We need to pay close attention to effective feedback. It is also VITAL to understand that successful communication has an implied foundation of overlapping and intersectionalised cultures. We may not like this cultural conflict, but understanding how such conflicts came about, and how emotionally invested individuals are in the pride of their communications, is important to forming encoding systems and channels that work for all co-participants.

No Private Languages

The procedural matter of how meaning in language itself arises gives a third dimension to the structural considerations of Shannon and Weaver. In the early 1950s, philosopher Ludwig Wittgenstein produced an (unfinished) book, posthumously published as Philosophical Investigations. The volume was the partial accumulation of thirty years of intense thought around how human beings create meaning and how this becomes ‘fixed’ in the communicative process.

Earlier in his career he had assumed that this process was one of forming mental ‘pictures’, where the component parts of the picture had an isomorphic application to words, grammar, symbols, actions, etc. This ‘Picture Theory’ dominated thought throughout philosophy right up until the 1950s. But it had problems. It didn’t quite account for the subtleties of how raw meaning accrued to non-symbolic discourse (gestures, body movements, facial expression, etc.).

Hence the revised thought in Philosophical Investigations. And in particular one of the most controversial concepts in philosophy, the so-called No Private Language Argument.

To simplify: Wittgenstein imagines a person having a specific personal ‘sensation’. Perhaps a pain, twinge, itch (or combination of these) that they’ve not had before. They then mentally visualise a symbol to represent this sensation.

A while later, they have what they conceive of as the same sensation, and hence visualise the self-same symbol. But here comes the question: how does this person KNOW they are correctly using their visualised symbol? How do they know this demonstrative connection between sensation-and-‘word’ is coherently the same as when it was first conceived?

The answer Wittgenstein comes to is that it is impossible to visualise a linguistic or communicative symbol of any sort that is private to the person, and which can then be consistently used in practice. If all records of usage are in memory, then how can the user be certain that memory is correct? Certainly, more recent conceptions of memory validate this position: memory is a constructive facility and not a hard and fast record of events. We reconstruct our memories of the past on demand. How do we know if these memories are accurate? Well, on an individual level, this is not possible.

Unless…

Unless, of course, they are verified and made ‘true’ by our interactions with others. What I know, what I mean, what I remember, and what I conceive of, all become ‘true’ because of our social interactions. These confirm or deny our consistency.

Hence, logically, there can be no languages (and by extension, communicative meaning) that are solely known to one individual. All communications, and the meanings that come from them, are public, interactive and evolving via the social connections we make.

More profoundly, since words such as ‘identity’, ‘self’, ‘I’, ‘me’, ‘you’, ‘she’ (etc.) are all constructed publicly and achieve meaning in the social arena, the idea that they refer to something ‘real’, such as a non-corporeal unseen psychological self, also falls at Wittgenstein’s hurdle.

For him, we are formed in the business of being social and become who we are via our communicative actions. He once commented “What is troubling us is the tendency to believe that the mind is like a little man within.” (Culture and Value, 1980, op. posth.), and for me at least, never a truer word has been said.

In my particular case, dissolving away this troubling little man within has been a revelatory process. I am released from seeking internalised solutions to external problems. I no longer need to seek psychological ‘perfection’ because such things are interactively constructed in how I conceive of myself amongst others. And these communicative acts construct the phenomenal self (how it feels to be me). If there is a battle to be fought over emancipation of identity, then it lies in our communicative world.

Consequences

I could go on. There are other aspects of philosophy and sociology that I might elaborate (the Sapir-Whorf Hypothesis, Habermas’ theory of the Public Sphere, Jean Lave’s Situated Learning, Heidegger’s phenomenology of the Self, etc.) all of which add to the above arguments. But you’ll be happy to know I will leave those to a different discussion. You have been warned.

What seems important to me though is how we, whether disabled, transgender, black, white, cisgender, gay or straight (no matter the intersectionalised term), can fight and effectively ‘win’ our social battles. The dialectic of liberative identity happens in real-time, in a real communicative world, where the phantasms of historic symbols and meanings can come under close examination. Notwithstanding the matter that I stated at the outset of this polemic (“There is, of course, an irony in writing about communication. The very issue brings forth nuances that an essay doesn’t really satisfy.”), I still believe something can be resolved from this analysis, even though the words themselves (their context, style, and the assumptions you make about me) are laden with the burdens and scars of usage.

Be that as it may, I am hoping that one further Wittgensteinian quote is made clearer by my writing: “My propositions are elucidatory in this way: he who understands me finally recognizes them as senseless, when he has climbed out through them, on them, over them. (He must so to speak throw away the ladder, after he has climbed up on it.)” (Wittgenstein, Tractatus Logico-Philosophicus, 1922).

Or in other words, as a disabled person who happens to be transgender, the provocation is enough: after reading, considering and assimilating, then throw these words away.

BEG

References:

McLuhan, M, (1964) Understanding Media: The Extensions of Man (The MIT Press)

Shannon, C. E., & Weaver, W. (1949). The Mathematical Theory of Communication. (University of Illinois Press; based on Shannon’s 1948 paper in the Bell System Technical Journal, 27(3), 379–423.)

Wittgenstein, L. (1922) Logisch-Philosophische Abhandlung (Tractatus Logico-Philosophicus). (Cambridge University Press, UK)

Wittgenstein, L. (1953) Philosophische Untersuchungen (Philosophical Investigations). (Cambridge University Press, UK)

Wittgenstein, L. (1980, Op. Posth.) Vermischte Bemerkungen (Culture and Value). (Blackwell’s, UK)

Children-in-Need-ism

First of all, can I say I’m not (and never have been) opposed to charitable giving. To be charitable is to show compassion and empathy for those who suffer, or are in situations where life and limb are at risk. I am a child of the ‘Live Aid’ generation. I too have bought the Band Aid single and thought about the Ethiopian tragedy.

But I also thought: why? Why had starvation happened, and why was it happening with such regularity across the planet? When I was a child, I remember Biafra. Or being at school and being taught to think of the poor children of Africa. I was taught that giving was a good thing, and even though I came from a relatively poor family, that giving to the charitable campaigns that came up every so often was the right thing to do. Even when I took my pocket money to the local newsagent to satisfy my sugar craving (a weekly event), there was usually a plastic (or ‘boody’) figure of a poor little child wearing a caliper ready to accept any pennies I had left.

Charity as a system was baked into life, as inevitable as breathing. It was supported by the morality I had been taught. A Judaeo-Christian morality. It was right to give to the poor, the disabled, and those affected by disasters. But, “the poor will always be with you”, so it was said. And that meant that campaigns and giving were going to be a permanent feature of society. Our society in the UK, and other societies elsewhere, both poor and rich.

Still, the question ‘why?’ haunted me. Why was it that we, as a community with moral standards, couldn’t do something permanent about the serious problems that haunted the world around me? Each year, I saw the same man-made disasters occur, and heard the appeals for aid. Each year I saw the charitable campaigns for the disabled recur, and heard the appeals for donations. No one ever seemed to say why this was a recurrent fact of life. Or why the situation for minority groups across the world seemed to stay the same, no matter how many millions of pounds were raised. The poor (the disabled, the forgotten, the unpopular causes) were indeed still with us. No matter how many times I put pennies into the box that the plastic child was carrying.

When I grew up and began studying philosophy it became plain that there was an ethical dilemma at play. Here I was, a disabled person, someone who was ‘in need’ by definition of my bodily morphology. Hence I welcomed the voluntary actions of the non-disabled in ameliorating the issues affecting my life. What’s not to like? They were doing good. I was (and am) grateful. Organisations were created to help people like me, huge efforts put into play, voluntary hours given freely, and money raised by those who had never met me. It was heroic.

So why did I feel so unhappy?

Not just because of the dependency, and resultant stigma, of being a charitable recipient. But also an awareness that we, as a society, were dealing with the surface issues of inequity, but incapable (or unwilling) to deal with the root causes.

The matter of modern charitable giving, at heart, relies on an embedded individualistic vision of societal obligations. It says that the individual is responsible for all the matters that affect them, and must deal with these without making onerous demands on others. This concept evolves from unchallenged (‘hegemonic’) attitudes we learn as we grow. One of these is that it is up to ME to deal with my disability. I should not expect other people to have an obligation to change their lifestyles in my favour, other than those motivated by whatever affectations they may have. By affectations, I mean common emotional responses, such as pity, sadness, sentiment, guilt, etc.

In response to such affectations, we give to charity. The affectation goes away once the giving has been completed. And how much better if we enjoy ourselves whilst we’re doing it! We can mollify those slightly Victorian feelings of charitable holiness by watching entertainment as we donate. In a sense, we receive something indirectly in return for giving, and hence can absolve ourselves of residual mawkishness. It’s easier to be a ‘quasi-Mother-Theresa’ if you’re watching your favourite singer or comedian giving of their time on TV. There’s less of that embarrassing moral uncertainty to worry about.

It’s what I like to call ‘Children-in-Need-ism’. Every year the Children in Need charity marathon happens on TV. The organisers attempt to break the record raised across the UK each year. We give, or campaign, or sit in baths full of beans. We are sometimes even interviewed whilst giving. There are solemn moments when the benefits of last year’s fundraising are displayed. We may think: how lovely that these poor disabled kiddies (or elderly people, or Downs Syndrome children, or wheelchair users, or…) now have a better community centre, or bus, or can employ a fundraising worker. Just add your own recipients and outcome to this list.

Yet no one ever asks: why are there still children in need in the UK? In one of the five richest nations on earth, and a society that (at least in theory) says it cares deeply about the welfare of its youngest members. No matter how much is raised and disbursed, no matter how many heart strings are plucked, no matter how we feel we need to act to take away the sense of guilt and pity, the same old problems exist year to year. The names may change, but the issues stay the same.

As I said at the beginning, I am not in any way hostile to charitable giving. I am simply concerned about its futility in dealing with systemic matters of inequity and neglect. Please don’t let anything I have written above discourage you from giving to Children in Need, when it comes around. Or putting your loose change in a collection box. Such things do help, even if only peripherally. But when you do give, consider what could be done to permanently improve the situation for disabled people (and others) at a national and international level.

Then, ironically, we may work towards never requiring Children in Need again.

BEG

The Rules of the Game


“I think we have gone through a period when too many children and people have been given to understand ‘I have a problem, it is the Government’s job to cope with it!’ or ‘I have a problem, I will go and get a grant to cope with it!’ ‘I am homeless, the Government must house me!’ and so they are casting their problems on society and who is society? There is no such thing! There are individual men and women and there are families and no government can do anything except through people and people look to themselves first.” (Margaret Thatcher, Woman’s Own interview, 1987)


Thatcher’s 1987 interview statement has become legendary, of course, and I quote it at
greater length above than is usual. Her viewpoint was an on-the-nose summation of
neo-liberal thought: setting itself as fundamentally different from socialist points of
view, and making it plain that the final responsibility for human welfare lay with the
individual concerned, not with collective action. Such things as collectivity were
harmful to freedom (see Karl Popper’s work), deterred entrepreneurialism, and
stagnated innovation. Hence, ‘lame ducks’ must go to the wall. It was harsh, but it was
‘true’ (from the Thatcherite point of view). And the truths of this world are inescapable.
But let’s test Margaret’s assumptions. To do so, a little ‘gedankenexperiment’ (thought
experiment) is in order.


Imagine you woke up this morning and had been transferred by some mysterious, unknown
power to a sub-tropical island. You are totally on your own. You don’t even have records and
a record player. Just you, and the clothes you’re in right now. The island is quite large. You
can walk from end to end in a day. It has a mild climate (no tornadoes or droughts, rain is
mild, sun is warm enough to go naked…), plentiful fresh water, and lots of tasty edible fruits
easily at hand. No animals or insects, but enough to eat and drink for a healthy lifetime.
A question: could you ever do anything immoral on your island?


It’s a curious question to ask, but an interesting one to consider as, at heart, it
examines what morality is at the individual level. Effectively it asks whether morality is
ever possible when there’s no one around to judge what right and wrong might be.
When you’re totally alone.


Ignoring the matter of a deity, my own view is that the word ‘moral’ goes out of the
window in such circumstances. Being alone, there’s nothing you could do that could be
judged as offending a moral code if there’s no possibility of it affecting anyone else. The
idea of morality IS a social consideration, rooted in the idea of there being others
around who could be harmed by one’s actions. Our freedoms are not just ‘freedom to’,
but also (because of other people) ‘freedom from’. See the work of Isaiah Berlin for
further discussion.


In the above little tale, all you have to add is one Man Friday for moral issues to come
into being. Reading Robinson Crusoe as a child, it always seemed strange to me how Crusoe
justified his treatment of Friday, who was better fitted to island life than Crusoe
was, yet often considered inferior. As I have grown older and more aware of white
imperialist thinking, my diagnosis of Defoe’s story has become easier, but no less
surprising.


So, my viewpoint (to cut a long story short) is that Thatcher’s assertion falls at the very
first hurdle of rationality. As soon as we are born into a world where there are other
entities, then we gain moral responsibilities to them (and them to us). Society IS the
matter of moral engagement with others who share our world. Out of such
engagement come our legal affairs, economic attitudes, social mores, etc.
It is in the rational business of occupying the same environment (the ‘Social Order’), that
morality evolves. Such questions as:

  • How do we share out limited resources?
  • How do we avoid harmful conflicts?
  • How do we work together to achieve things we cannot achieve alone?
  • How do we gain help when we need it?
  • What are legitimate transactions between individuals where both are ‘winners’?
  • What do we do about those who refuse point-blank to follow agreed rules?

… all arise from the simple existence and recognition of others.

Contrary to the neo-liberal concept, the Social Order comes (existentially) first, and
then we are inevitably confronted by how we must deal with its many quandaries. The
more complex our interactions become, the bigger the thorny problems, played out in
competing needs and demands. We can see this illustrated in current crises world-wide,
both small and large, provincial and global. Global warming, for example, makes the
issue of social responsibility clear, not just within the UK, but in our impacts world-wide.


What exactly is morality? Morality is engagement with others, and how such
engagements are governed. In philosophy, discussion of morality comes under the
heading of Ethics. Though a number of folk see morality and ethics as interchangeable
terms, in effect they are very different. Morality is what we do in our engagements with
others; Ethics considers the arguments about how we justify those engagements. As
such, ethics is a meta-practice, examining moral situations. We may gain our morality
from family, religion, or simple habit, but ethics cannot be about just accepting these
traditions without questioning their validity. Hence, the Social Order and its practices
are the material upon which ethics is predicated.


We also cannot simply ignore or bypass morality. To do so is unethical in and of itself. To
ignore moral challenges is to be (at worst) solipsistic, or (at least) morally naive. Why?
Because the impact of one’s actions (however trivial) on others is a necessary aspect of
simply being alive and acting within the world. And this is reciprocal.


Morality is unavoidable: it is existentially necessary to any kind of living. Note that I do
not take a mystical stance on morality. What I mean by this is that I set aside ideas of some
non-human, all-powerful arbiter of ‘The Good’ that we can appeal to outside of our
own concerns. This is not a criticism of religious belief, just a critique of how a non-corporeal entity could have any influence on our existential world. But more on that in
some other article.


The Big Question is, therefore, how do we evolve rules for living with each other?
The list of philosophers and political thinkers who have discussed this matter is very
long indeed, and some of them have had a very profound impact on our lives. For
example, the USA’s political affairs are still heavily influenced by the philosophical
pronouncements of a first-century Jewish prophet. Indeed, the ubiquity of his influence is
so extensive that most of us think in a moral sense based on his tradition, even without
realising it. Because of this subtle hegemony, philosophers have been suspicious of rule-making that has absolutist properties. Absolute moral rule-making has come under
heavy stress in recent times, especially after World War 2, when moral principles were
re-examined in the light of the Holocaust. Such complexities and disparities do have
knock-on effects however, one of them being moral disorientation or moral nihilism.
This is understandable, as all traditional moral rules have been questioned at one point
or another, leading to conflicts of ethical confidence.


Nevertheless, I maintain that moral rules of a transient and dialectical type
are possible, if not absolute rules. By transient, I mean moral criteria that are applicable
to today’s social order, but may change in the future. By dialectical, I mean that the
moral changes that occur come about by stressing the current rules of behaviour to
breaking point, and then adjusting them to the new social conditions in force. The
philosopher Immanuel Kant said that human beings should be treated as an end in
themselves and not as a means to something else. The fact that we are human has
value in and of itself. We need not justify the fundamental value of other human beings,
because we would not wish to critique our own value as people. Such matters as
human value thereby become universalised. That no social order could function
without the paramount value of human existence (not just ‘life’) is surely an
unspoken ‘given’ for the world in which we live? To return to first principles, as discussed
earlier: put two people on our island and the immediate evolved consideration of how
we value one another comes into being as relationships grow more complex.


But acceptance of this principle has far-reaching consequences. Kant’s rule implies that
the exploitation of human beings is a fundamental wrong, couched in the fair
functioning of the social order, and not excusable under any exception. Yet exploitation
exists. Human beings with whom we have no direct contact are used to create the
privileged world we inhabit. They work towards our happiness, whilst not being granted
fair happiness under the self-same moral rules we consider vital to our supposed
equitable society. Fairness, equity, justice, common good, etc. are all seen as slogans of
choice when the feeling takes us, but rarely utilised as central to social organisation, or
allowed to cascade down through our politics. We may ‘feel’ them to be important, but
they are easily dismissed when the broad road of pragmatism beckons. Achieving an
end without considering the morality of the means, is a common feature of both
pragmatism and utilitarianism, both of which are a traditional foundation of British
politics. They are not wrong in themselves, but wrong when they break the ‘no
exploitation’ rule. The shortcuts to getting-things-done (politically) are paved with
exploitational neglect. Just ask anyone at the bottom of the UK’s economic pile.


Above all, the social order produces a wide variety of responses to engagement. We
tend to categorise these (inaccurately, I think) as ‘private’, ‘public’ and ‘third sector’. The
structural growth of all these elements eventually runs up against the Kantian rule
against exploitation as the pressure to achieve grows overwhelming. This is especially
true of organisations that grow large, and where the centralised decision-making
process becomes more and more distant from those who are affected by their decisions
(customers, clients, workers, etc). In such circumstances, the messy question of human
interaction becomes substituted by easier technocratic solution-making. There is a
consequent loss of accountability and moral sensibilities. Examples of this kind of
development are very easy to find. Indeed, most of us have confronted these in our lives
at one time or another. That we take such incidents as just ‘given’ indicates how easily
we have been seduced by neo-liberal non-values, and how reluctant we are to belabour
exploitative behaviour in our lives.


It strikes me then, that one of the key priorities of independent organisational growth
should be an adherence to Kant’s principle at root and branch:


“Rational beings can never be treated merely as means to ends; they must
always also be treated as ends in themselves, requiring that their own reasoned
motives must be equally respected.”
Groundwork of the Metaphysics of Morals (Kant, 1785)


We may not always be able to practically comply with the rule’s fierce exigencies, but a
confrontation with its sense at each step of our decision-making processes seems a
sensible way of maintaining moral probity.

