David Cameron’s recent philippic against Oxford University was idiotic, ill-informed and worrying, especially since it seemed to imply that the PM believes it perfectly acceptable to appraise an institution or a student body using race as the sole criterion. While he is at it, then, he should also criticise the British men’s basketball team for including only three white players (and no Asians) in its 12-man squad. Fortunately, most organisations that select their members on the basis of ability are, unlike this country’s leader, blind to any expected racial composition and prefer to focus on what matters (athletic and intellectual capacities and potential, for instance).

Politics aside, the row that is now brewing will likely include appeals to the notion of prejudice, construed in a uniformly negative light as something that obstructs clear and rational judgement (indeed, I have used the word with much the same connotations in one of my previous posts). Now, this is obviously true in some cases, and a number of prejudices can only be classified as outright wrong. However, there is more to the concept of prejudice than is usually, and somewhat simplistically, assumed. The notion, in fact, extends far beyond over-generalisations about other people’s racial or national origin (e.g. “all Japanese are workaholics”); our reliance on pre-judgements (from the German rendition of prejudice as Vorurteil) is a pervasive feature of our thinking, and rarely does it take this fully negative – or indeed fully verbalisable – form.

Prejudices help us negotiate our way through the battery of meanings we encounter in our daily transactions with the world; they do not form part of explicit judgements, like “this person here is my sister Lucy” or “David Cameron is out of his depth as Prime Minister”, but they underlie them nonetheless. How? By providing an implicit understanding of, and familiarity with, the background conditions which make predicational statements like those just sampled possible.

Obviously, we need to unpack this claim a little more to make it plausible; the definition of prejudice, if it is to serve the purpose I have just assigned it, has to be expanded. And it is precisely when understood as a pre-judgement that prejudice performs this enabling function; that is, as an implicit, culturally specified part of our understanding of the ways in which we become aware of the meanings and meaning-creating practices of our society. It is not cognitive (hence “pre-”) or propositional (i.e. expressible as “I believe that [A, B and C]”, or at least mentally entertained as such). It is rather an inculcated set of abilities allowing us to conform to, and recognise, certain pre-established norms and habits of the society and culture in which we grew up.

For instance, when I encounter a table and a set of chairs around it, I have a culturally acquired, implicit awareness that chairs are for sitting on; this awareness does not form part of my conscious appreciation of the situation before me (e.g. that one of the chairs is upside-down), but it enables me nonetheless to make some sense of it. I do not have this capacity in virtue of having been taught explicitly what kind of things chairs are and what they are for, but in virtue of having been brought up in a culture in which one sits on chairs. Therefore, I do not consciously judge that sitting is the function of the chair before me and then reason that I can sit on it. I simply do sit on it, because I have a prejudicial understanding of its role.

Such prejudices are obviously not only harmless, but essential to normal everyday functioning in a society and among its artefacts. But as I become more and more immersed in these practices, many of the background habits I do not notice become sedimented into this implicit understanding, and this might lead to a certain intellectual laziness: I can simply begin to pass over some more complex issues as so obvious as not to merit a pause for reflection (in the same manner that I pass over the intellectual appreciation of a chair in favour of just taking a seat). Naturally, I do not need to re-evaluate (except in some very specific context) my pre-cognitive understanding of chairs and whether they really are for the kind of use I, and all those around me, have put them to. However, I may need to re-examine some less innocuous practices, and the implicit grasp of things that my dealings with the world have shaped and automatised in me, since they too, in time, merge seamlessly into the background structure of my making sense of the world and may turn out to be doing more harm than good to how I interact with others and form explicit judgements and beliefs.

But while there are clear cases of harmful or indefensibly over-generalising prejudices on the one hand, and, on the other, of those that simply enable us to make sense of the world around us and live in it in a normal, human way, there will inevitably be a grey area in which, without careful self-examination, we may never really become aware of the extent to which a given pre-judgement is helpful or detrimental to our life with others. Which makes it all the more urgent to be ready, and able, to engage in a critical appraisal of one’s own prejudices.

Technological progress, and the way in which it affects the patterns of our everyday behaviour, has been a potent subject for thinkers assessing its social, political and existential implications, and it has inevitably made its way into the popular consciousness. Yet the routes that thinking about technology can take are – perhaps surprisingly – not nearly as manifold as the advances they try to assess. Most generally, we can distinguish two main approaches to the problem. On the one hand, there is the social-historical perspective, focused on the here and now of the human-technology relationship and on the current trends in our interactions with machinery; a perspective firmly rooted within the present temporal horizon, supplemented with some amount of historical knowledge. On the other hand, there are those who try to envisage the impact of the technologies of tomorrow on the future of humanity; this approach, if it is to avoid being dismissed as mere science-fiction speculation, needs a firm grounding in the knowledge of current technologies and, more importantly, their scientific underpinnings. It is therefore no surprise that it is mainly taken up by people with extensive scientific training.

These approaches differ not only in the temporal reality they address, but also in their assessment of technology’s impact on our lives. In the latter line of thought, at least as exemplified by Ray Kurzweil or Nick Bostrom, one can discern a strong optimism, epitomised by the belief that what good we have so far got out of technological progress is only a small proportion of what lies ahead. Technology, it is claimed, has the potential not only to make our lives even easier and more comfortable than they are now and to improve the livelihoods of the least fortunate; it may be able, crucially, to transform the very fabric of human nature, by prolonging our lifespan or catapulting our cognitive capacities to stratospheric levels.

These are audacious claims, often met with stares of incredulity by the audiences with which Kurzweil and Bostrom share their views; and yet one could imagine a similar expression on the face of a prehistoric caveman who has just been told that one day people will walk on the Moon. Still, for my part, I fear that any prediction about the future development of humanity through technology rests on such unstable epistemological foundations that it needs to be approached with extreme care, and even the most considered of predictions has to face the problem of innumerable variables that can never all be taken into account. Human nature, historical change and even the direction technological progress may take are far from quantifiable, even in the most extensive of formulas.

Assessing the impact of technology today seems, in this context, a relatively less demanding enterprise, and one of the themes running through a number of discussions is that of technology’s negative influence: for instance, its role in alienating human beings from each other and from an understanding of nature and its workings. Albert Einstein famously claimed that modern, “civilised man” had much less understanding of the tools he used than did the “primitive man” who could easily fashion such everyday essentials as bows and arrows by himself. Karl Marx, on the other hand, decried the mechanised process of production, which alienated the worker not only from the result of his work but also from the very process of working, understood as producing something for one’s own use; wage labour created a distance between the worker and the end-product by introducing remuneration, rather than the ability to use what one produced, as the reward. Even more outrageous have been Zygmunt Bauman’s contentions that the horrific efficiency of the Holocaust was a natural consequence of the ongoing march of modernity (and technology) rather than its macabre aberration. Here alienation reaches its extreme consequence: separating human beings so fundamentally as to allow one group to ignore the very humanity of another.

This way of thinking about technology – as a force that helps set human beings apart from the natural environment (not to mention polluting and destroying the environment itself) and from one another – has also found its way into popular culture. Pixar’s Wall-E, with its depiction of a human race alienated and literally uprooted, not only from the Earth but also from everyday natural human interaction (now hijacked by the technological medium, which becomes an end in itself rather than a means of communication), is just one recent example. One can gather, therefore, that there is a strong intellectual opposition to technological progress, insofar as it is seen as dehumanising and de-naturalising the human way of life, subjugating it to numerical efficiency and squeezing the unquantifiable out of the essence of being human.

Where the two approaches converge is in their unshakeable belief that technology has the power to transcend its original purpose of being a useful tool and become a transforming influence on the way in which we, human beings, see ourselves and the environment in which we dwell. It not only assists, but also shapes, our social and cultural interactions.

It is a defensible position, of course, but perhaps an overemphasised one. Despite the astounding progress of technology, we still seem to recreate the human relationships laid out for us by poets and playwrights from Antiquity onwards; we still relate to the emotional turmoil of an Oedipus or an Antigone, even though our times – in terms of their social structure and technological advancement – are as distant from theirs as any.

Naturally, such understanding will never be free of the influence of our own historical perspective; but the fact that works such as those alluded to still elicit an emotional response testifies to a certain universality they possess. Just wherein that universality consists we may be at a loss to say, but this does not diminish its powerful effect.

But, all this human connection maintained across the ages notwithstanding, does technology really threaten to turn us into isolated, alienated individuals and slowly deprive us of emotional connection with one another and with the natural world?

After all, it facilitates communication; it enables us to cross in under a day distances our ancestors would have travelled for months; it forges new and more democratic ways of spreading and discussing ideas; it brings to our awareness the voices of people who, without its help, would have remained unheard; it provides access to a wealth of information and endows us with the freedom to select what we want to find out about.

Naturally, I may not be able to build my own computer, or even understand the process whereby I can watch the animals of the Amazon on my own TV screen. But this seems to me a fair cost to incur in exchange for what technological progress has brought me, and what my grandfathers and great-grandfathers could only have dreamt of: a deeper appreciation of numerous, previously inaccessible aspects of the world, and the ease with which I can communicate with others.

Every vacancy advertised by one of the UK’s newer universities is accompanied by the following statement:

At the University of… we are committed to an inclusive approach to promoting equality and diversity. We aim to have a more diverse workforce at all levels of the institution and particularly welcome applications from people from minority ethnic backgrounds and people with disabilities, who are under-represented in our workforce.

Now, at first glance, the intentions expressed here are laudable: after all, it can only be beneficial if representatives of all social strata partake in the workforce and feel that the road is open for them to take positions of power and responsibility in any career they may wish to pursue. If nothing else, it is at least a clear marker that a given society affords its members multiple possibilities of professional development. It is also little more than common sense from the perspective of recruiters: employers who discriminate against some applicants on the basis of considerations which do not pertain to a candidate’s abilities and potential for performing the role deprive themselves of a considerable portion of the talent pool, and may end up hiring less qualified and suitable candidates merely because of their own irrational prejudice. Moral implications notwithstanding, this is simply bad for business, and no profit-seeking entrepreneur can afford, in the long run, to let unreasonable discriminatory opinions dictate recruitment policy, not least because doing so unnecessarily limits the pool of employable talent.

This seems to be the primary reason why any recruiter wishing to find the best-equipped people for the job would readily embrace the principle whereby applicants are selected on their suitability for the role and nothing else. This is, intuitively, the very embodiment of equality: all candidates are judged solely on their merits, without worrying that their background, or any other consideration irrelevant to their ability to perform job-related duties, will weigh in any manner on the recruiter’s choice. Equality, in this sense, is constituted by the recruiter’s ensuring a level playing field for all applicants. It is clearly a sensible position to take (it offers the most reliable method for selecting the best candidate), and the University seems to commit to it wholeheartedly.

Unfortunately, the University pays the principle of equality nothing more than lip service, as the sentence immediately following its very expression demonstrates. The aim “to have a more diverse workforce” flies in the face of the policy of equality, insofar as the latter consists in disregarding – during the recruitment process – those characteristics of the candidates that do not in any way bear on their ability to perform the required tasks. The commitment to increasing diversity invites – indeed, requires – categorising prospective candidates, as well as current employees, along racial, social, sexual or religious lines (depending on what kind of diversity the employer seeks), rather than merely on the basis of their qualifications, and making recruitment decisions informed, to an extent, by these irrelevant considerations. Equality goes out of the window as soon as factors not only beyond the candidate’s control, but also without any bearing on his or her abilities, are taken into account when appointments are made.

The perilous nature of this position becomes clearly visible on brief reflection: first, it reinforces the perception that such personal characteristics as sex, race or creed do play a role when it comes to successful job applications; secondly, it implies that without such imposed requirements, under-represented groups would be unable to gain entry into recruiting institutions (a specific example of how this comes about can be found here). Both these propositions smack of an era when it was widely believed that some groups are inherently better (in one way or another) than others; they certainly have no place in a modern society. This makes it all the more shocking that statements implicitly endorsing these views (such as the one cited at the beginning) are not only widespread but also seemingly uncontroversial. It beggars belief that in a society as sensitive to any real or imagined case of racism, sexism and homophobia as the British one, it remains entirely acceptable for employers to adopt a recruitment policy which overtly allows such characteristics as race or ethnic origin to play a role in the selection process.

Amid the recent castigations of the government’s policy of spending cuts, one accusation stands out in particular for its powerful combination of sheer absurdity and ostensible profundity. The charge that the cuts are motivated by nothing less ominous than an ideology seems to have made its way seamlessly into the collective consciousness of the Coalition’s detractors and become a legitimate ground for criticising it (a practice engaged in both by the highest echelons of the Labour party and by grassroots student protestors). Why this position has gained any currency amid the other arguments that can be levelled against the Chancellor’s financial policy – arguments that could at least be conducive to advancing a genuine political debate – is a moot point. However, despite its undeniable popularity, the view that the cuts are bad just because they are ideology-inspired is misconceived, as it ignores the truth that ideology is essential not only for guiding electoral campaigns and forging political identities, but also for governing per se.

However much it is pervaded by dirty tricks, corruption and questionable alliances and allegiances, politics, after all, is a battle of ideas. In order to secure a mandate to rule, parties engage in campaigns, showcasing their proposed ways of governing the country and denouncing those embraced by their opponents. The voters, ideally, acquaint themselves with the campaign programmes and choose to vote for those parties whose views on managing the nation they find most appealing. Naturally, this fairly straightforward process is often, to an extent, perverted by factors which fall well outside the scope of programmatic proposals: on the one hand, the voters’ taking into consideration (half-consciously, perhaps) the personal appeal and charisma of candidates (or maybe even their looks); on the other, schemes ranging from character assassinations and smear campaigns down to voter intimidation and fraud. Nonetheless, all things considered, it is a fairly safe guess that it is the attractiveness of the policies the parties promise to pursue when in power that forms the chief motivation for supporting one or another. (For those unwilling to accept this statement at face value, I recommend visiting the comments section under any political opinion piece that the major newspapers publish online. The debates originating there give a good indication that it is the political measures themselves that constitute the main concern of the majority of posters.)

The point of these rather oversimplifying remarks is to draw attention to the fact that party programmes are not, usually, just sets of opportunistic proposals conjured haphazardly on the spur of the moment merely to address current social and economic problems. Rather, their origins are traceable to the collection of general ideas a party has about what constitutes good government, what is good for the people, what will benefit the nation, and so on. Through its function as the source of proposed policies, this system of rather stable (though by no means immutable), rather abstract views ensures consistency in the party’s long-term operations and, crucially, allows it to be recognisable – and electable – through the viewpoints it endorses, at the same time distinguishing it from its competitors. In short, it gives the party an identity which determines the nature of its proposals. It embodies what a party “stands for”.

This theoretical base can be expressed as a set of propositions outlining the general direction of policy, in crucial areas such as the economy, society and international relations, that a party sees as worth pursuing – propositions which ultimately derive from essentially moral principles (these usually have a somewhat axiomatic character, i.e. they are not explicitly defended but simply asserted, as in “We hold these truths to be self-evident”). These moral axioms form the first, most abstract and, in contemporary western polities at least, generally least controversial theoretical foundation. It is at the level of which specific beliefs are derived, by argument, from this foundation that the genuine differences arise, and they increase as we move on to the practical policy initiatives that a party advances to meet current challenges.

This tripartite hierarchical structure illustrates how moral and philosophical theories exert a direct influence on any party’s practical proposals for running a country. It also shows that without such a basis parties would simply degenerate into a mass of uniform, unidentifiable collective bodies, their goals and prospects inscrutable. While not impossible in principle, such a state of affairs would render the whole electoral process meaningless and vapid.

In light of the above, it becomes clear that what a party needs in order to function in a democracy as a recognisable, consistent entity with a legitimate claim to partake in political discourse and, from time to time, to rule, is a broad set of principles from which its philosophical beliefs and, ultimately, its practical initiatives are derived – viz. an ideology. And if its electoral promises originate from this ideology, then by making ideologically motivated changes to legislation it is simply carrying out the voters’ will. Ideology represents the values that political parties wish to instil when in government; it is on the basis of the compatibility of a party’s ideology with individual voters’ viewpoints that parties amass the support, and hence the mandate, required to rule the nation and to pursue the values outlined by their ideological commitments. Put simply, every party’s legislative measures, when it is in power, are ideologically motivated, insofar as the party fulfils its electoral promises (which, to reiterate, have their source in the ideas the party endorses). As a result, accusing any party of being ideologically inspired in its actions constitutes a criticism not of the actions as such – which, obviously, would be warranted and called for in principle – but of the fact that the governing party governs! And this, naturally, should have no place in a mature political debate.

PS. It is worth mentioning that most contemporary political disagreements are fought out not at the level of axiomatic principles – where the only viable strategies would be to show that the opponent’s professed beliefs are inconsistent with one another (“If you believe A, how can you also endorse B?”) or to appeal to elusive moral intuitions and try to prove that some of the principles espoused by the adversary conflict with them. Rather, most battles range over the issue of which philosophical beliefs follow from these axioms and which courses of action can best achieve the desired results. While this offers an increased choice of weapons with which to attack the adversary’s cause (not only can contradictions be pointed out; one can also dispute the validity of the reasoning that leads to a certain philosophical belief (“commitment to liberty does not entail economic laissez-faire-ism”), as well as advance quasi-empirical arguments against the proposed practical measures (“they tried this in Texas and it didn’t work”)), it also requires a shared commitment to certain values. In an excellent book on a totally different topic, William Blattner outlines the situation as follows:

’red-state’ Republicans and ‘blue-state’ Democrats have divergent conceptions of what a free society means. Our divergent conceptions of freedom are typically embodied for us in the models we use to talk about freedom. For a red-stater, the freedom to pray at school and the freedom to own and use a gun are examples of freedom that blue-staters do not accept. For a blue-stater the freedom not to be pressured to pray at school and the freedom to seek an abortion are examples of American freedom that red-staters do not accept… There can be a failure of consensus, precisely because these are divergent conceptions of American freedom, freedom which is a common reference point for all of us. Americans who disagree with the conception – say neo-Nazis – stand apart from the community. (2006, p. 68, my emphasis)

The broad agreement on what the reference points are limits the scope of potential conflict and, especially in Western Europe, can create an appearance of stagnation in political debate, stemming from the perceived lack of genuine differences between the mainstream parties. Yet even if the disagreements do not penetrate to the very basic level of moral principles, in part vindicating this perception, there are nonetheless a number of important practical and philosophical controversies which fuel major political disputes. But few arguments rest on as shallow a foundation, or are as unlikely ever to contribute to a meaningful debate, as the one about a party’s actions being ideologically inspired.