You are currently browsing the category archive for the ‘Philosophy’ category.

There’s still plenty of Twitter buzz and hype, if you’re looking in the right place. And it doesn’t look like it’s going away quite yet.

Principally, the discussion seems to centre on questions about the point, or lack thereof, of Twitter. Isn’t it just a distraction? Is it worthwhile? And so on.

Paul Bradshaw at the Online Journalism Blog is all in a flurry about Twitter, and making some excellent points while he’s at it. Some of what follows is drawn from there.

Apart from the fact that all this discussion ultimately helps twitter grow, there are a few issues particularly guilty of generating negativity.

One comes from the Twitter site itself where it principally sells itself with the tag

stay connected through the exchange of quick, frequent answers to one simple question: What are you doing?

Sure, it’s simple, but it doesn’t exactly imply any serious use or depth. It sounds like: ‘trivial, and ultimately something you could do without’. Now, like the Metro free sheet, perhaps this is all Twitter needs to make some money, but it would not really be deserving of hype and acclaim.

Closely associated, but not propagated by Twitter itself, is the claim that ‘it’s just like facebook profile updates’. Elicited response: ‘err, so?’ Quite right.

Another source of curious negativity comes from the association with celebs. Those who think that many celebrities already make meaningless noise blindly consumed by a bored Joe Public will think Twitter is just another slimy tentacle reaching into people’s lives and distracting them from being themselves (or something similar).

The thought is that surely if Twitter is anything good it should amount to more than a new medium for celebrities to mouth off.

Thus there are plenty of quizzical what’s-the-point questions to keep the debate up. What fuels the other side of the debate? Why don’t we just throw up our hands and say that it has no point? That it’s ‘dumb but fun for some’?

I think the answer is that Twitter feels like it has potential – something as yet mostly unrealised but glimmering and promising somewhere in the near future.

Quickly-breaking-news-stories and organising-large-groups-of-people point towards this potential, but aren’t quite it – the ‘it’ that many are anxious to be a part of but can’t articulate yet. That it’s unarticulated probably accounts for the fact that it is so often described as ‘answers to “what are you doing now?”’ or ‘like Facebook updates’ – and hence more quizzical questions. The supporters want to say more, but they don’t quite know what to say yet.

I’ve enjoyed Twitter so far – it’s a bit like being in a small (but growing) exclusive club, like Facebook in the old days  – but have a lingering feeling that there really does need to be more of a point to the Tweets.

How can I make it purposeful? Both in what I put out, and in who I follow. Here’s a thought:

“Twittasophical: Philosophy in 140 characters.”

Since much of philosophy is about asking questions, it might sit well with some probing 140 character questions. Perhaps it should be called Philosophy140.

In the meantime though, I can try and make at least some of my tweets non-trivial, and keep looking, with all the others, for that elusive ‘it’.

I follow – 15

Following me – 12

Verdict in progress: Fun, but looking for something more.


It’s been quiet here, very quiet. I’ve been taking a break, and I’ve been processing the world and my life a little – input but no output, if you like. At least, no output on this blog – as I begin the final year of my PhD I’ve been gearing up to make it happen. In other words, I’ve actually started writing it, like for real.

But enough of my personal life, what has awoken me from blogging hibernation? A new web tool called Debategraph.

In true just-out-of-hibernation fashion, I’m a little late with this one, but I still think it deserves some lovin’. It’s a ‘wiki debate visualization tool’, which aims to allow the organisation and presentation of different positions and issues within an argument. Since I spend my days reading, processing, and writing arguments, I’m all ears when it comes to new ways to present subtle and complex issues.

I like it because it seems to be a way of avoiding sound-bite simplifications, and could prove to be a very useful tool to help students, and anyone else interested in learning and forming opinions, get their heads round just how difficult some problem or other is.

Of course, to really get anything out of it (or put anything into it) it requires a decent amount of your attention. You can’t just read a headline to decide what’s right, nor can you slap up your first thoughts and opinions with complete disregard for what else is being said (well, I guess you can, but it’s not so easy or tempting). But in that sense it reflects more accurately the difficulties of real debates, something often hidden by internet rants.

It’s not perfect, by a long shot, but the idea gets my thumbs up. It’d be great to see philosophy students using it for a bit of collaborative learning.

Are you one of those people who finds it a great deal easier to be angry with strangers than with people you know? I am. The person in the street who looks like they are about to cause trouble, or the driver shielded by glass and steel who didn’t indicate a left turn and nearly knocked me from my bike. Nameless and distant, there is minimal sense of shared humanity. I cannot see their perspective, they are ‘other’.

The philosopher Richard Rorty didn’t believe that we could reason our way to universal human rights. No rational argument could do the work required because the idea of an objective (non-relative) set of principles was a myth (he thought). I’m not sure I agree with his relativism, but I do like his solution. We should tell stories.

We need a route to empathy to remove the strangeness of strangers; to make them no longer the others but one of us. Rorty believed that stories offered this route and hence offered a hope of bridging global chasms between cultures.

If I knew the story of the boy setting fire to the bin, or of the man in the silver BMW who didn’t indicate, I might think of them differently.

Stories are powerful tools for breaking down barriers, and that’s why I think the Manchester-based Asylum Stories is a great project. They are us.

How do you connect philosophy and new media? How do you connect philosophy and old media, for that matter? Philosophy is ideas, and ideas don’t translate easily to visual media.

So I was interested by what this guy is trying to do.  I think it’s fun. Does it work? Do you have any idea what he’s going on about, if you don’t know what he’s going on about already?

It seems I’m not the only philosopher (does that sound too grand? – perhaps ‘student of philosophy’) that enjoys the odd Wordle now and then. After wordling my blog a few days back, I wordled a chapter in my thesis, and then all of Hume’s Treatise.
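Incidentally, under the hood a Wordle is little more than a word-frequency count, with each word drawn at a size scaled to its count. A minimal sketch of the counting step in Python (the function name, sample sentence, and length cut-off are my own illustrative choices, not anything Wordle itself does):

```python
from collections import Counter
import re

def word_frequencies(text, min_length=4):
    # Lowercase, split on runs of letters/apostrophes, drop short words
    # (a crude stand-in for Wordle's stop-word filtering)
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if len(w) >= min_length)

sample = ("Impressions and ideas: impressions are lively, "
          "ideas are faint copies of impressions.")
freqs = word_frequencies(sample)

# The most frequent long words would dominate the cloud
print(freqs.most_common(2))  # [('impressions', 3), ('ideas', 2)]
```

A real cloud then just maps those counts to font sizes and packs the words onto a canvas.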

When I finish my PhD, I shall try and remember to Wordle it, and then I shall get it framed and hang it on my wall.

It seems that Maverick Philosopher Dr. Vallicella is a linguistic prescriptivist. In a lengthy post he takes issue with a number of ‘misuses’ of the English Language, brought about by the many ‘thoughtless lemmings’ that use it.

I fully agree that it is possible to misuse language: one can (and I often do) get spelling, punctuation, and grammar wrong, and one can use terms, phrases, and metaphors incorrectly. So I wouldn’t want to say that there is no right or wrong about language use.

Some may claim that ‘anything goes’ since language is social and organic, but it is precisely the social aspect of meanings that allows the possibility of error. The error might not be ‘absolute’ or ‘objective’, but it does arise if an individual’s use goes against established norms. The norms are there to make language possible, and I’m all in favour of setting them down in books and teaching them to children (and adults).

However, this has to be weighed against the fact that meanings do change over time and across different communities. Dictionaries are updated, metaphors die and pick up new associations, standards for ‘correct’ and ‘incorrect’ use shift.

Hence the kind of prescriptivism that Vallicella goes in for seems somewhat narrow-minded, and takes the idea of error in language use to unhelpful extremes.

For example, he takes issue with ‘irrelevant qualifiers’ such as ‘litmus’ in litmus test, and ‘track’ in track record. Apparently we should just stick with ‘test’ and ‘record’ because the uses have nothing to do with acids or running. But this is silly, because the additions are there as metaphors – we say ‘litmus’ test to indicate that it is a reliable, clear, and quick test for something or other, usually with a binary-type result (unlike an endurance test or a spelling test). I’m sure you can work out the same for ‘track’ record.

Many of the other points are about using loaded expressions for argumentative advantage (‘assault weapon’ in debates over gun laws, ‘baby’ used to refer to a fetus), but this is so widespread that I’m not sure it counts as a misuse of English – it is just a particular use of English for the purpose of persuasion. We shouldn’t pretend that we can argue in totally neutral terms – every word carries associations and implications (I think analytic philosophers sometimes forget this).

Vallicella seems blind to his own preferences here. He claims ‘there is nothing offensive about ‘illegal alien’: it is an accurately descriptive term’ (so we shouldn’t start using ‘undocumented worker’). Like it or not, ‘illegal alien’ is a very loaded term. There may be reasons to keep using it in some contexts, but we can’t pretend it carries no argumentative force. In a debate over the legal status of a group of people, referring to them as ‘illegal aliens’ might be seen to beg the question just as much as ‘assault weapon’.

Well, there’s my feedback.

I might be ashamed to call myself a philosopher, after this from philosopher AC Grayling. It is an illogical and ridiculous rant against faith schools that gives atheists a bad name (an atheist agrees). I half wonder, in fact, if it is meant as an ironic bash at the kind of articles that find their way to places like Comment is Free.

I might be ashamed, but I know that being a member of a kind does not mean that you share all characteristics with other members, or responsibility for what they do. We don’t criticise all wheeled vehicles for polluting, just because cars pollute (bikes don’t).

But Grayling writes that religious people:

traditionally employ and always threaten torture and execution for those who do not accept their theories, who to gain their ends sometimes engage in war, massacre and murder, and at other times use bribery, brainwashing, and techniques of preying on the poor, sick, depressed and traumatised

…and so ‘religious’ people should not be involved in running schools. Apart from the above being just false as it is put (‘always threaten torture’?), these are also accusations that you might make against white people, so does that mean white people shouldn’t run schools? Or people in general, for that matter, perhaps we should have schools run by robots…

It’s the kind of argumentative fallacy that I teach first years to avoid. (But then, as I said, perhaps it’s not supposed to be anything more than a silly rant.)

Does the web provide a radically different platform for storytelling? How do we engage with and tell stories on a digital multimedia platform? These are some of the questions we had to ask ourselves at the MediaNet Academy last week. What we came up with was an exploration into the possibilities of digital storytelling.

One of the marks of engagement with the internet is a loss of the linearity of experience found in other forms of media. Reading a book or watching a film or TV show follows a fairly standard beginning to end path, one dictated to us, to a significant degree, by the producer (author, director, journalist).

The stories we consume may experiment with unpredictable narrative paths (like the film Memento) but the path we as consumers take is given to us.

There will always be some, of course, who choose to read a newspaper article or novel by jumping in and out of the text at different places (less so with a rented film, and not at all for broadcast TV and cinema) but these are rare exceptions.

In using the web, however, it seems to me to be becoming far more common for us to forge our own path and guide ourselves around stories:

Scan the headlines or a news feed, follow a link, watch streamed footage, check wikipedia for background, check some related posts, scan a few comments… get bored, pick up a new story, follow some more links, find a different take on the subject, think about getting a coffee and the work you should probably get down to doing.

How deep we wish to go with a story, how much background, how many opinions, how much fact checking… so much of this is now up to us. Who is the narrator of the story we experience? Are we?

In a small computer room with whitewashed walls and a single small window, three of us worked for 36 hours to produce a ‘narrative platform’ for the story of Jo Bloggs. No one piece of text, no one video or radio show, not even one single website presents the story as a whole. The story is told through the viewer’s engagement with the network of links to blogs and pages and Facebook profiles. Every journey through the story will be different as the viewer finds her own path.

It’s a fictional story, though the boundaries are fuzzy (but this isn’t new – think historical novel). If you want, you as a character can get involved in some of the same ways that we all have the potential to become part of the news stories that we read and watch every day.

Interesting stuff. D’ya get me?

A very real frustration of mine at the moment is that the more I read and learn about a particular topic, the less I feel qualified to say anything about it. This poses a particular problem for writing a PhD, but I don’t doubt that it is relevant to a whole host of other pursuits.

Each paper and book I read opens up new avenues of enquiry, new possible positions to take, and (most importantly) new reasons to think that what I was going to say needs amending or totally revising.

I find myself frantically pursuing arguments and trains of thought but getting no closer to being able to write coherently about them.

The more I learn, the more I realise that everything I write will get things at least a bit wrong. Even as I write this I hear possible objections and replies in my head – reasons why what I have said is not the whole story, or how the problem should be solved.

One tempting response then is not to write at all. Even with a deadline looming I am struggling to get through this barrier.

And I know it will have to be this way: pragmatics kicks in and the need to write outweighs the need to be right. I must undertake the task of producing a thing that can exist in its own right, detached somewhat from my internal search for a better thing to say. Let it stand alone and not be shy of its own imperfections.

Herein, for some, lies the key to more than writing a thesis – to blogging, to engaging with our society and culture, to joining the conversation.

Last night MPs debated and voted on the ‘saviour siblings’ issue in the House of Commons. In the debate, Conservative MP Dominic Grieve condemned the misleading rhetoric of those who said that this wasn’t an issue of ‘designer babies’. Whether good or bad, Grieve argued, we shouldn’t play down the fact that there is an element of design involved: the babies are designed to have a specific tissue type that will be of potential benefit to an older sibling.

The fact that this is not the only reason for the creation of the life, and the fact that the babies are not ‘designed’ in any other way, does not detract from the fact that some level of design is involved and should be recognised as such, Grieve claimed.

His respondents hit back with the assertion that this was just a matter of selection and not of design: the required embryo is selected from an otherwise random process.

No doubt Grieve’s claim was intended to prevent his opponents from shrugging off the rhetorical force of the ‘designer baby’ criticism so easily. Whilst they may not be ‘designer babies’ in the fullest sense that that phrase implies, they are ‘designed’ nonetheless (he claimed).

Does selection count as design? Perhaps when it is intelligent and directed. It might even be thought that design is inherently a matter of selection (that selection is necessary for design): I can design a pattern on a grid by selecting the colour of each square, and the composer designs his piece by selecting which note to place next.

However, it is not obvious that (intelligent) selection is sufficient for design. Our intuitions perhaps go the other way when a single act of selection takes place from among a set of options, as with selecting an embryo from a batch. The finished ‘product’ then does not seem to have been designed in itself, since a different selection would not have led to any changes in that baby, but a different baby altogether. We would not say that we in any way designed the house we live in simply because we chose it from a set of available houses to buy: a different choice would not have meant changes to this house, but a different house altogether.

It is this, I believe, that is doing the work for those who claim it is not design: there is no way this particular baby could have been different given our selections, because our selections happened at an earlier stage. The consequence of this, however, is that given a big enough selection of embryos with different traits, it should be possible to be quite specific in choosing the traits of your child without any ‘design’ being involved at all. You can hear the lab saying ‘We don’t design your child to have brown eyes and an aptitude for maths and music, we just select it from a wide range of potentials.’

What’s the moral here? Perhaps it is just that language is slippery, and that rhetorical force presses in from all sides (there is much more to ‘designer baby’ than issues of design). But it is also that this is an issue of control. Design or no design, ‘intelligent selection’ is control.

As it happens, I have no firm opinions, moral or otherwise, on this issue.

