Friday, August 28, 2015

'Proper Good Innit Bruv': A Philosophical Look at Writing Styles



A Pernicious Influence? See Geoffrey Pullum's take


I have a dilemma. Every year I teach students how to write. I teach them how to come up with a thesis for their essay; how to research it; how to make their arguments persuasive; how to arrange those arguments into a coherent structure; and, ultimately, how to form the sentences that convey those arguments. In teaching the last of these skills, I am forced to confront the thorny topic of writing styles. It is then that my dilemma arises.

Style guides commonly present students with lists of prescriptive rules. Following these rules is thought to promote good writing. You are probably familiar with such rules. I’m talking about things like ‘Don’t end a sentence with a preposition’, ‘don’t split infinitives’, ‘adopt the active voice’, ‘learn how to use apostrophes, semi-colons, colons (etc) properly’, ‘don’t use words that aren’t really words, such as “irregardless” or “amazeballs”’, ‘the word ‘hopefully’ should not be used as a sentence adverb’, ‘don’t use ‘presently’ to refer to the present moment; only use it to refer to something that will happen in the near future’ and so on. Hopefully, you get the idea; presently, we’ll see the problem. The problem is that I am an extreme non-prescriptivist when it comes to language usage. I don’t believe there is a ‘rule’ against using split infinitives, ending sentences with prepositions, or any of the other commonly listed offences. I fully embrace the non-prescriptivist view of language promoted by the likes of David Crystal, Steven Pinker, Geoffrey K. Pullum, and Oliver Kamm. I think the rules just alluded to are really nothing more than (fussy) aesthetic preferences, and that the English language consists in a number of overlapping and equally valid styles.

And yet, when it comes to grading student essays, I often find my inner prescriptivist creeping to the surface. I don’t like it if students use idioms such as ‘It don’t seem no good’ or ‘it was proper good’. I rail against students who misspell words, put punctuation marks in the wrong place, adopt colloquial or slang terms, and generally fail to adhere to the conventions of Standard English. But am I entitled to do this? If there is no hard and fast set of rules to be followed, if English really consists in a number of equally valid styles, how can I complain when my students don’t conform to my preferences? This is my dilemma.

I was recently grappling with this dilemma and it occurred to me that there are some interesting philosophical issues at play. I decided it was possible to justify my quasi-prescriptivist attitude, but to do so I first needed to isolate and understand the competing metaphysical and ethical views of language that underlay my dilemma. Once I did this, I could better explain why a certain amount of prescriptivism is justified. I’m going to try to share the fruits of this analysis in this blogpost.

I am hoping that this doesn’t come across as a rant. But there is a danger that it will since I do find some of the classic rules to be absurd and my frustration with them may well show through.


1. The Metaphysics of Language
Metaphysics is my first port of call. I think the debate about language usage is best understood as a war between two competing metaphysical views of language. That is to say, two competing views of what language is (throughout this discussion I focus on the English language but I presume the same can be said for most other languages), where these in turn dictate particular ethics of language usage (i.e. sets of views on how we ought to speak and write).

The proponents of the two competing views can be given names. I’ll call them sticklers and pragmatists:

Sticklers: Have a legislative or Platonic view of language. They think language consists in rules relating to semantics, grammar and spelling that are either set down by appropriate authorities (the legislative view) or are intrinsic to the language itself (the Platonic view). This dictates a deontological approach to the ethics of usage. You simply must follow the rules in order to speak or write properly. This is sometimes accompanied by a consequentialist ethic, which is largely focused on conservative values such as preserving a dominant national identity and preventing the pollution of the language by ethnic groups or the lower classes (to be clear: I don’t wish to tar all sticklers with this conservative ethos — it is just that it is sometimes present).

Pragmatists: Have a conventional and evolutionary view of language. They think language consists in a set of constantly shifting and evolving conventions governing semantics, grammar and spelling. This dictates a consequentialist approach to the ethics of usage. This ethic takes different forms, some focusing on achieving a communicative aim and others more political in nature (such as resisting the conservative ethos and celebrating linguistic diversity). Pragmatists can be pure act-consequentialists — that is to say: they can decide which conventions to follow based solely on what is best in a particular communicative act; or they can be more like rule-consequentialists — that is to say: they can follow a set of default conventions because doing so leads to the best outcome on average.

Although I am here imagining that sticklers and pragmatists fall into two distinct ‘camps’, the reality is likely to be more complex. It is more likely that the labels ‘stickler’ and ‘pragmatist’ define a spectrum of possible views. This spectrum filters into the teaching of styles. In the diagram below, I illustrate a spectrum of possible learning outcomes for the teaching of writing styles. The spectrum ranges from ‘Stickler Heaven’ at one end, to ‘Pragmatist’s Anarchy’ at the other. I don’t want my students to end up in Stickler Heaven, but I don’t want them to end up in Pragmatist’s Anarchy either. I need to stake out some middle ground (Pragmatist’s Paradise) and explain why they should join me there.



2. Why the Sticklers are Wrong
As a first step toward staking out that middle ground, I need to explain why the stickler’s approach to language is wrong. I do so with two arguments. The first tries to illustrate why the legislative/Platonic conception of language is false (and, contrariwise, why the conventional and evolutionary view is correct). The second tries to argue that adopting the deontic ethic has unwelcome consequences. Of course, if you have fully imbibed the stickler’s deontic Kool Aid, you may be unswayed by such consequentialist reasoning, but I doubt many people will have fully imbibed the Kool Aid. In ethical debates, people often resort to consequentialist reasoning when following a deontic rule would lead to a horrendous outcome. And while I do not promise horrendous outcomes, I think the outcomes to which I appeal will be sufficient to persuade most people that the deontic ethic is inappropriate.

Let’s focus on the first argument: why the legislative/Platonic view of language is wrong. To some, this will simply be obvious: English is not governed by a legislative authority, and the rules of language are not Platonic entities (unlike, say, the rules of mathematics). We don’t discover eternal truths about sentence structure and word meaning; the truths, such as they are, are clearly the result of contingent, messy, cultural evolution.* This can be easily demonstrated by focusing on the history of some of the stickler’s favourite so-called rules. These histories illustrate how what sticklers take to be ironclad rules are in fact the products of historical accident. I’ll give a few examples. An excellent source for the historical evolution of usage rules is David Crystal’s book The Fight for English:


Orthographic Conventions: Orthography refers to how words appear on the printed page. Remember, language began as a spoken medium. Words were conveyed through phonemes (i.e. sound structures), not through written symbols. Many words share the same phonemes (i.e. they are pronounced in the same way), even if they have distinct meanings. Listeners are usually able to tell the intended meaning from the context, or by simply asking the speaker follow-up questions. Things were different once writing took hold. Conventions had to be adopted so that different meanings could be discriminated. But these conventions emerged gradually and messily. One classic illustration of this concerns the use of the apostrophe. Conventions emerged in which the apostrophe was used to signal an abbreviation (as in ‘don’t’) or possession (as in ‘greengrocer’s’). But these conventions clashed in some cases, most famously in the distinction between it’s and its. The former is an abbreviation of ‘it is’ (or ‘it has’) whereas the latter is a possessive form of ‘it’. There is no logic to this distinction. It is a purely arbitrary compromise that emerged because of the awkward evolutionary history of the apostrophe. In this sense it is akin to biological evolutionary accidents like the recurrent laryngeal nerve of the giraffe. I could list numerous other examples of orthographic evolution but I won’t. Just read any book from the 1700s and 1800s and you’ll see how orthographic conventions have changed over the course of relatively recent history.

The Split Infinitive Rule: This is a famous stickler preoccupation. The belief is that it is somehow wrong to say things like ‘to faithfully uphold’ or ‘to boldly go’ because in these cases an adverb (faithfully/boldly) is being used to break up the infinitive form of a verb (to uphold/to go). Crystal notes that this rule only seems to have entered English grammar books in the 19th century and was an example of Latin reasoning (i.e. the belief that English should copy the conventions of Latin), which has been popular at various times over the history of the English language. In other words, it originated in the 1800s as a particular manifestation of a recurrent cultural fad. For some bizarre reason, fealty to the rule lingers and, as Pinker argues, may even have been responsible for Chief Justice John Roberts’s bungled administration of the presidential oath of office to Barack Obama back in 2009. This is bizarre because, as many have pointed out, the English language doesn’t really have an infinitive form of the verb. Instead, it has a subordinator (‘to’) combined with a simple form of the verb (‘uphold/go’): there is no unitary infinitive to split. Good writers have routinely and consistently breached the ‘rule’, much to the chagrin of the sticklers. It is odd that some continue to insist upon it.

Concern about ‘Proper Words’: One of the strangest of all stickler beliefs is that there is a fixed fount of words, and that some so-called words aren’t really words and so shouldn’t be used. Examples include words like ‘irregardless’ or ‘gotten’ (to name but a few). This betrays a misunderstanding of how language works. Nothing illustrates the historical and conventional nature of language more clearly than the passing of words in and out of existence (read Shakespeare to see some famous examples of this). We need new words to explain new phenomena (‘selfie’, ‘googling’ etc.), and we abandon old words when they are no longer needed. The only standard for whether something counts as a word is whether it is being widely used and has become conventionally understood. So, of course, ‘irregardless’ and ‘gotten’ are words. They are widely used and conventionally understood. You may not like them, but they are words irregardless of what you might like.

The conventionality of language is also illustrated by syntactic rules. In the case of English, it is common to adopt a subject-verb-object order (e.g. ‘John saw the dog’). But in other languages different orders are common. For example, Japanese commonly adopts a subject-object-verb order (i.e., roughly equivalent to ‘John the dog saw’). Both syntactical structures seem ‘normal’ to their relevant communities.

So much for the stickler’s metaphysics. What about their ethics? Even if you accept that language is a messy nest of conventions, you might nevertheless think that we ought to follow certain rules lest we wander into pragmatic anarchy. I agree with this to an extent (as I’ll explain below). It’s probably not a good thing to constantly invent your own new words, or ditch the traditional orthographic rules, but I still think it is a mistake to adopt the deontic attitude of the sticklers. This is for two reasons. First, the rules that are beloved by the sticklers are often barriers to good communication. Second, the deontic attitude seems to encourage an overly moralistic approach to the teaching of style.

There are several examples of how following the stickler’s rules creates barriers to good communication. Take the split infinitive rule. Sticklers would have you believe that Captain Kirk should say ‘[boldly to go] or [to go boldly] where no man has gone before’ instead of ‘to boldly go where no man has gone before’. But the latter would seem preferable to the former, not just because the phrase has become deeply embedded in the popular psyche, but because the adverb is supposed to modify the verb: it is a particular attitude toward going somewhere that Kirk is invoking. It makes sense to stick the adverb in front of the verb. Similarly, the oft-quoted rule about writing in the active voice can be an impediment to good communication. Use of the active voice directs the reader’s attention to the doer of an action (John kicked the dog), but oftentimes you will want to direct their attention to the done-to (The dog was kicked by John). If you rigidly stick to the rule, you will make your prose more difficult to follow.

As for the moralising attitude, it is present in passages like this (from Lynne Truss’s Eats, Shoots and Leaves):

If the word does not stand for ‘it is’ or ‘it has’ then what you require is ‘its’. This is extremely easy to grasp. Getting your itses mixed up is the greatest solecism in the world of punctuation… If you still persist in writing, ‘Good food at it’s best’, you deserve to be struck by lightning, hacked up on the spot and buried in an unmarked grave.

I know that Truss’s tongue was firmly in her cheek when she wrote this. But similar pronouncements are found throughout the work of the sticklers (Oliver Kamm’s book Accidence Will Happen documents many examples of the tendency). And even if this doesn’t always end with hacked-up bodies in unmarked graves, it does seem to end with a sneering condescension towards the idiots who just can’t get it right. I don’t think such an attitude is becoming of an educator.


3. Pragmatic Prescriptivism?
Where does that leave us? It leaves us with the pragmatic approach to style. We cannot plausibly conceive of language as a legislative or Platonic phenomenon. We must conceive of it as a conventional and evolutionary phenomenon. What’s more, we must recognise that there isn’t one set of agreed-upon conventions. If there were, we might be warranted in favouring a form of Stickler’s Heaven. But there isn’t. There are, instead, shifting and sometimes competing conventional systems. In certain contexts, it is conventional to use non-Standard spellings and idioms. If you are texting your friends, you can say things like ‘gr8’ or ‘c ya later’ (although, ironically, this seems less common now that there are fewer restrictions on message-length). If you are hanging out with your friends, it might be conventional to say things like ‘proper good innit!’ or ‘I’m well jel!’ or ‘I didn’t do nothing’. But if you are writing an academic essay…

…Here’s where I come back to my dilemma. When writing an academic essay, I think students probably should adopt a fairly traditional, so-called ‘Standard’ style of expression. This means they should probably avoid slang, non-Standard spellings, unusual punctuation and so forth. They should also probably master the conventional meanings of oft-misused words like ‘enormity’, ‘meretricious’ and ‘disinterested’, and learn to put apostrophes in the conventional places. But why should they do this? If there is no right or wrong — if, as Pinker says, when it comes to English the lunatics are literally (or should that be figuratively?) running the asylum — then why can’t they mix and match conventional styles?

This is where the pragmatist's consequentialist ethic kicks in. I think all pragmatists should adopt the following ‘master’ principle of style:

Pragmatist’s Master Principle of Style: One’s writing (or speaking) style should always be dictated by one’s communicative goals, i.e. what one is trying to achieve and who one is trying to achieve it with.

In the academic context, students are (in effect) trying to impress their teachers. They are trying to show that they understand the concepts and arguments which have been presented in class. They are trying to demonstrate that they have done an adequate amount of reading and research. They are, above all else, trying to defend a persuasive thesis/argument. What’s more, they are trying to do this for someone who isn’t sure that they are capable of it. As I say to my students, ‘you might know what you are talking about, and I might know what you are talking about, but I don’t know that you know what you are talking about — you need to convince me that you do’. The style they adopt should be dictated by those communicative goals.

This means that, in most cases, they should adopt a traditional and Standard style of expression. There are two main reasons for this. First, this is the style that dominates academia and adopting it eases communication. Students have to do a lot to convince me that they know what they are talking about. They won’t help their cause if they adopt countless neologisms and non-Standard idioms. It will put me in a bad mood. I’ll have to work that much harder to understand what they are saying. Second, adopting that style allows students to earn acceptance and respect within the relevant academic community. Certain conventions may be absurd or ridiculous, but it is easier to break them once you have earned respect. Oliver Kamm gives the example of the actress Emma Thompson, who urged a teenage audience to avoid overuse of ‘like’ and ‘innit’: ‘Because it makes you sound stupid and you’re not stupid’. This feels right. It is not that students are genuinely stupid for adopting non-Standard styles; it is that they will be perceived to be so and that, in most cases, is not a good thing. There is a pragmatic case for some forms of linguistic snobbery.

That said, there are no hard-and-fast rules. This is one of the discomfiting features of the pragmatic approach to language. We can’t fall back into the reassuring embrace of ironclad prescriptivism. Some academic styles are maddeningly opaque; it would probably be a good thing to break with their conventions. Sometimes a bit of slang can liven up an otherwise staid piece of prose. Sometimes you have to coin a new word or misappropriate an old one to label a new concept. You have to exercise wisdom and discernment, not blind faith in a set of rules. This takes time and practice.

I have only one rule: the more you read and write, the easier it becomes.


* As far as I am aware, there may be a Chomskyan linguistic theory that does favour a quasi-Platonic view of language structures. But this arises at a very abstract level, not at the level of particular languages, nor at the level of style. Such Chomskyans would, I am sure, accept that there are many contingent cultural variations in semantics, orthography and preferred idioms.

Wednesday, August 26, 2015

The Argument from Abandonment and Suffering




(Previous Entry)

The argument from abandonment and suffering is a specific version of the problem of evil. Erik Wielenberg defends the argument in his recent paper ‘The parent-child analogy and the limits of skeptical theism’. That paper makes two distinctive contributions to the literature, one being the defence of the argument from abandonment and suffering, the other being a meta-argument about standards for success in the debate between skeptical theists and proponents of the problem of evil.

I covered the meta-argument in a previous post. It may be worth revisiting that post before reading the remainder of this one. But if you are not willing to revisit that earlier post, allow me to briefly summarise. Skeptical theism is probably the leading contemporary response to the evidential problem of evil. It casts doubt on our ability to identify God-justifying reasons for allowing evil. But skeptical theism is usually formulated in very general terms (e.g. ‘we wouldn’t expect to know all of the possible god-justifying reasons for allowing evils to occur’). Wielenberg’s meta-argument was that it is much more difficult to justify such skepticism in relation to specific instances of evil. In those more specific cases, there may be grounds for thinking that we should be able to identify God-justifying reasons.

And that’s exactly what the argument from suffering and abandonment tries to maintain. The argument builds upon the parent-child analogy, which is often used by theists to justify skeptical theism. So we’ll start by looking at that analogy.


1. The Limitations of the Parent-Child Analogy
I wrote a longish blogpost about the parent-child analogy before. That blogpost was based on the work of Trent Dougherty. Wielenberg effectively adopts Dougherty’s conclusions and applies them to his own argument. So if you want the full picture, read the earlier post about Dougherty’s work. This is just a short summary.

The parent-child analogy is the claim that the relationship between God and human beings is, in certain important respects, very similar to the relationship between a parent and a child. Indeed, the analogy is often explicitly invoked in religious texts and prayers, with their references to ‘God the father’ and the ‘children of God’. Proponents of skeptical theism try to argue that this analogy supports their position. It does so because parents often do things for the benefit of their children without being able to explain or justify this to their children. For example, parents will often bring their infant children for rounds of vaccination. These can be painful, but they are also beneficial. The problem is that the child is too young to have the benefit explained to them. From the child’s perspective, the harm they are suffering is inscrutable. The skeptical theists claim that we could be in a similar position when it comes to the evils that befall us. They may have some greater benefit, but God is simply unable to explain those benefits to us.

Proponents of the problem of evil are often unswayed by this line of reasoning. They accept that, in certain instances, parents are unable to justify what they do to their children, but this is usually a temporary and regrettable phase. When a child grows up and is capable of some understanding, however limited it may be, a loving parent will try to explain why certain seemingly bad things are, ultimately, for the best. Imagine if you had a four-year-old child who had to have their leg amputated for some legitimate medical reason. This would, no doubt, cause some anguish to the child. But you, as a loving parent, would do your best to explain to the child why it was necessary, using terms and concepts they can grasp. What’s more, you would definitely not abandon the child in their time of need. You would be there for them. You would try to comfort and assist them.



The result is that the parent-child analogy can cut both ways. Skeptical theists and proponents of the problem of evil simply emphasise different features of the analogy. The theists highlight cases in which a parent is unable to explain themselves, but these are extreme cases. In many other cases, there is good reason to think that God (qua parent) would try to explain what he is doing to humanity and would not abandon humans during a time of great suffering and need.


2. The Argument from Abandonment and Suffering
It is precisely those latter features of the parent-child analogy that Wielenberg tries to exploit in his argument. His key observation is that there are cases of seemingly gratuitous suffering that are accompanied by a sense of divine abandonment (i.e. by a feeling that God is no longer there for you and that he may not even exist). He cites two prominent examples of this. One is that of C.S. Lewis, who famously experienced a sense of abandonment after the death of his beloved wife. Lewis wrote about this eloquently:

Meanwhile, where is God? ... [G]o to him when your need is desperate, when all other help is in vain, and what do you find? A door slammed in your face, and a sound of bolting and double bolting on the inside. After that, silence. You may as well turn away. The longer you wait, the more emphatic the silence will become. 
(Lewis 1961, 17-18 - quoted in Wielenberg 2015)

The other example is that of Mother Teresa whose posthumously-published letters revealed that she felt a sense of abandonment throughout most of her life and suffered greatly from this. These examples are interesting because they involve prominent theists. But there are presumably many others who suffer and feel a sense of abandonment without ever recovering their faith. They might be even more potent in the present context.

The combination of such suffering and abandonment is particularly troubling for the theist. There are two reasons for this. One is because it runs contrary to the tenets of the parent-child analogy: the combination of suffering and abandonment is exactly what we would not expect to see if God is like a loving parent. The other is because the sense of abandonment often exacerbates and compounds the suffering. It is precisely because we lose touch with God that we suffer all the more. Again, this is not something we should expect from a loving parent.

To set out the argument in more detail:


  • (1) A loving parent would never permit her child to suffer prolonged, intense, apparently gratuitous suffering combined with a sense that she has abandoned them (or that she does not exist) unless this was unavoidable.

  • (2) God is relevantly like a loving parent.

  • (3) Therefore, if God exists, he would not allow his creations to suffer prolonged, intense, apparently gratuitous suffering combined with a sense of abandonment unless this was unavoidable.

  • (4) People do suffer prolonged, intense, apparently gratuitous suffering combined with a sense of abandonment.

  • (5) God, if he exists, should be able to avoid this.

  • (6) Therefore, God does not exist.



This version of the argument is slightly different from the version that appears in Wielenberg’s article. The main difference is that I have divided his premise (4) into two separate premises (4) and (5). I did this because I wanted to highlight how the avoidability of the suffering and abandonment is an important component in the argument, and something that a clever skeptical theist might try to dispute (as we shall see in a minute).
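
To make the deductive skeleton fully explicit, here is a minimal propositional sketch in Lean 4. This is my own rendering for illustration (it is not Wielenberg’s formalisation, and the proposition names are invented placeholders); it simply shows that premises (3), (4) and (5), as reformulated above, jointly entail conclusion (6):

-- A toy propositional rendering of the argument (my own sketch, not Wielenberg's).
-- 'Suffering' abbreviates: someone undergoes prolonged, intense, apparently
-- gratuitous suffering combined with a sense of divine abandonment.
-- 'Avoidable' abbreviates: God could have avoided permitting this.
variable (GodExists Suffering Avoidable : Prop)

theorem abandonment_argument
    (p3 : GodExists → ¬(Suffering ∧ Avoidable))   -- premise (3)
    (p4 : Suffering)                               -- premise (4)
    (p5 : GodExists → Avoidable)                   -- premise (5)
    : ¬GodExists :=                                -- conclusion (6)
  fun hGod => p3 hGod ⟨p4, p5 hGod⟩

The validity of the skeleton is the easy part, of course; the real philosophical work lies in defending the premises themselves, especially (1), (2) and (5).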

On the whole, I think this is a strong atheological argument. I think the combination of suffering and abandonment is potent. And I think the traditional forms of skeptical theism are ill-equipped to deal with it. Wielenberg points out that because the argument is analogical it doesn’t really rely on an explicit noseeum inference. But even if you were to translate it into a form that did rely upon such an inference, the inference would be specific and justified. The whole point here is that we should not expect to see the combination of seemingly gratuitous suffering and abandonment if God exists. This is true even if there are goods and evils (and entailment relations between the two) that are beyond our ken. The result is that the likely explanation for the combination of seemingly gratuitous suffering and abandonment is that these cases involve actually gratuitous suffering, and this in turn is incompatible with the existence of God.


3. The Possibility of Positive Skeptical Theism
There is one potential response to the preceding argument, implicit in the extant literature: DePoe’s positive skeptical theism. This version of skeptical theism differs from the others in that it doesn’t appeal to the mere likelihood of beyond-our-ken reasons for God’s allowing evil to occur. Instead, it argues that there are positive justifications for God’s creating epistemic distance between us and him.

DePoe’s position is thus slightly more theodical than skeptical. It builds upon theodical work done by the likes of Richard Swinburne and John Hick by arguing that there are reasons to expect a world in which God makes his existence uncertain. The reasons have to do with specific goods that are only possible if God’s existence is uncertain. For DePoe, there are two such goods worthy of particular consideration: (i) the possibility of a genuine loving response to God in faith; and (ii) the possibility of certain acts of supreme human love and compassion (I seem to recall Swinburne arguing that genuine moral responsibility was only possible in a world with some epistemic distance). I would tend to question whether these are truly good (and not simply ad hoc responses to the problem of evil) and whether the goodness is sufficient to justify the kinds of evils we see in the world, but I will set those worries to the side.

The important point here is that if positive skeptical theism is correct it has the potential to undermine the argument from suffering and abandonment. Where Wielenberg suggested that the combination of suffering and abandonment is exactly what we would not expect to see if God exists, DePoe is saying that this is something we should expect to see. Thus, God may not be able to avoid suffering and abandonment if he wants to realise the (greater?) goods alluded to by DePoe.

Wielenberg argues that this is an unpromising line of response. The reason is that DePoe’s positive skeptical theism opens up the problem of divine deception. The argument here is a little bit tricky so I’ll try to set it out carefully. It starts with an assumption:

Assumption: There cannot be any actually gratuitous evils — they are incompatible with God’s nature.

This is an assumption we have been working with throughout this post and it is one that DePoe and many other theists accept. It creates a problem in the present context because, as was argued above, there do nevertheless appear to us to be cases in which evil is seemingly gratuitous. This means that DePoe must be committed to the following:

DePoe’s Commitments: God must have created the world in such a way that (a) there are no actually gratuitous evils but (b) there are many specific instances of evil that appear to us to be gratuitous.

This, in turn, implies:

DePoe must believe in a world in which God has arranged things so as to systematically mislead us as to the true nature of good and evil (i.e. as to what is actually gratuitous evil and what is not).

DePoe’s God is a deceptive god: He achieves the necessary epistemic distance by deceiving us as to the true nature of good and evil.

This is problematic. For one thing, the notion of a deceptive god may be incompatible with certain conceptions of divine moral perfection (viz. a perfect being cannot be deceptive). For another, once you accept that God is deceptive in one domain it becomes more likely that he is deceptive in others. This may undercut the warrant that a religious believer has in certain sources of divine revelation. It is unlikely that many theists will be willing to pay that cost.




4. Conclusion
In sum, the argument from abandonment and suffering is a particularly strong version of the problem of evil. It highlights cases in which people suffer great harms and experience the absence of God. This is something we should not expect to see if God is like a loving parent. Would a loving parent really abandon her children (or cause them to believe in such abandonment) after they have suffered some great harm? Surely not. Yet God seems to do so repeatedly. Traditional versions of skeptical theism are ill-equipped to deal with this argument because in this case the noseeum inference is being explicitly justified. DePoe’s positive skeptical theism might proffer a response, but it does so at the cost of believing that God is a systematic deceiver.

Tuesday, August 25, 2015

On the Limitations of General Skeptical Theism


Erik Wielenberg


Erik Wielenberg has just published a great little paper on skeptical theism and the problem of evil. I don’t mean to use the word ‘little’ in a pejorative sense. Quite the contrary. I use that descriptor because the paper manages to pack quite a punch into a relatively short space (a mere 12 pages of text). The ‘punch’ consists of two interesting arguments. The first is a meta-argument about standards of success in the debate between skeptical theists and proponents of the problem of evil. The second is a strengthened version of the problem of evil, which focuses specifically on the problem of suffering and abandonment.

The second argument is the real centrepiece of the article and I will cover it in a future post. Today, I want to deal with the meta-argument. I do so because it sets the stage for the argument from suffering and abandonment, and because it is an interesting methodological point in its own right. I won’t delay any further; I’ll get straight into it.


1. The Problem of Evil and the Noseeum Inference
Everyone is familiar with the problem of evil. They all know that God is supposed to be a maximally powerful, maximally knowledgeable, and perfectly good being. They also know that there are many real-world instances of evil. This evil can take many forms, with the most commonly lamented form being the suffering of conscious creatures. The problem of evil simply points to the difficulty of reconciling the existence of such suffering with the existence of God.

Of course, it’s a little bit more complicated than that, and I don’t want to completely rehash the centuries-long debate about the problem of evil here. Instead, I want to home in on its most popular modern form. Back in 1979, the (sadly) recently-deceased William Rowe published an influential article entitled ‘The Problem of Evil and Some Varieties of Atheism’. In it, he presented an evidential version of the problem of evil, which has become the most widely-discussed contemporary variant on the problem.

Rowe’s argument appealed to the concept of gratuitous evil. This is a type of evil that is not logically or metaphysically necessary for some greater good (or for the prevention of some equally bad or worse evil). It is widely accepted, by theist and atheist alike, that a perfectly good being could not permit such evil. What is disputed is whether there are any actual instances of gratuitous evil. Rowe tried to argue that there are. He did this by highlighting examples of real-world suffering that don’t seem (in light of everything we know) to have any God-justifying reason for their existence. His famous example of such evil is a fawn who suffers horribly in a forest fire, with no one around to help or learn from the experience. He argues that we can infer the likely existence of actually gratuitous evils from the existence of such seemingly gratuitous evils.

To put it more formally, Rowe’s version of the problem of evil takes (roughly) the following form:


  • (1) If there are any actually gratuitous evils, then God does not exist.
  • (2) There are seemingly gratuitous evils.
  • (3) We can warrantedly infer the likely existence of actually gratuitous evils from the existence of seemingly gratuitous evils.
  • (4) Therefore, God is unlikely to exist.


The critical premise here is (3). As Rowe’s critics point out, this premise relies upon a ‘noseeum’ inference. In other words, it relies upon the assumption that if there were God-justifying reasons for allowing some evil we could expect to see them. This is something skeptical theists take issue with. The question is whether they are right to do so. To figure this out we need to consider their position in a little bit more detail.


2. Skeptical Theism and the Noseeum Inference
As Wielenberg points out, skeptical theism has two components: (i) a theistic component and (ii) a skeptical component. The theistic component is relatively straightforward. It consists in belief in God, either as classically understood (i.e. as a perfect being) or as understood by holders of some particular faith. Wielenberg uses a specifically Christian version of theism in his analysis because that is the version held by those toward whom he directs his arguments.

The skeptical component is slightly more complicated. The general gist of it is that we should be skeptical of our ability to fully know what God knows and that this skepticism undercuts the noseeum inference at the heart of Rowe’s argument. A number of more specific conceptualisations of the skepticism have been offered over the years. There is, for example, William Alston’s version, which focuses on different parameters of cognitive limitation that seem to apply to humans; and there is Michael Bergmann’s version which focuses specifically on the representativeness of our knowledge of good and evil and the entailment relations between the two.

Wielenberg doesn’t weigh the pros and cons of these different conceptualisations. Instead, he suggests the following as a version of skeptical theism that captures the core idea and does justice to some of the leading conceptualisations (most particularly the Bergmannian form):

SC1: It would not be surprising if there are possible goods, evils, and entailments between good and evil that are beyond our ken (but not beyond the ken of an omniscient God).

Skeptical theists think that a principle like SC1 is sufficient to undermine Rowe’s argument from evil. Are they right to do so? Here’s where Wielenberg’s meta-argument enters the fray.


3. The Need to Distinguish between General and Specific Noseeum Inferences
Wielenberg’s argument is that, to date, participants in the debate about skeptical theism and Rowe’s argument have paid insufficient attention to the difference between general and specific versions of the evidential problem of evil. The failure to do so means that the ability of skeptical theism to undercut the problem of evil is overrated, at least when that view is proffered in response to more specific versions of the problem.

Allow me to explain. The general and specific versions of the evidential problem work like this:

General Evidential Argument: There are many instances of seemingly gratuitous evil; therefore there are probably some instances of actually gratuitous evil; therefore God does not exist.

Specific Evidential Argument: Specific instance of evil E is seemingly gratuitous; therefore E is probably actually gratuitous; therefore God does not exist.

To put it another way, general evidential arguments say ‘Look, there are all these instances of evil that seem to be gratuitous. They cannot all be necessary for some greater good. Therefore, it is likely that at least one of them is actually gratuitous.’ And specific arguments say ‘Look, there is this specific instance of evil. We have tried really hard and we cannot come up with a God-justifying reason for allowing this evil. Therefore, it is likely that this specific instance of evil is gratuitous.’
These argumentative forms rely on different noseeum inferences:

General Noseeum inference: Moves from the existence of some seemingly gratuitous evils to the existence of at least one actually gratuitous evil.

Specific Noseeum inference: Moves from the seemingly gratuitous nature of E to its actually gratuitous nature.

The differences are crucial because it is much easier to be skeptical about general noseeum inferences than it is to be skeptical about specific ones. The general noseeum inference confidently assumes we should be able to ‘see’ god-justifying reasons for allowing evil wherever they may arise. A principle like SC1 successfully undermines such confidence. But the specific noseeum inference does not share this feature. It assumes merely that we should be able to see god-justifying reasons in some particular case. A principle like SC1 cannot undermine our confidence in inferring from that particular case.

This can be demonstrated more formally. Let’s take Rowe’s case of the fawn suffering in the forest fire as an example of a specific evidential argument from evil. It fits the bill because it points to one particular instance of evil and makes inferences about its likely gratuitous nature (Wielenberg calls this the ‘Bambi’ Argument). Now consider the following two variations on skeptical theism. The first is SC1, which we have already seen, and the second is SC1a, which is a more detailed variant on SC1:

SC1: It would not be surprising if there are possible goods, evils, and entailments between good and evil that are beyond our ken (but not beyond the ken of an omniscient God).

SC1a: It would not be surprising if there are possible goods, evils, and entailments between good and evil that are beyond the ken of human beings (but not beyond the ken of an omniscient God) but it would be surprising if any such possible goods, evils, or entailments had anything to do with fawns.

SC1 and SC1a are logically compatible. SC1 is a general and vague type of skepticism; it doesn’t rule out the possibility of sound moral knowledge in particular cases (indeed, that possibility is something skeptical theists need to preserve if they are to avoid other problems with their position). SC1a merely adds to SC1 a specific case in which we can expect to have pretty sound moral knowledge.

And here’s the critical point: because SC1 and SC1a are logically compatible, SC1 cannot by itself undermine Rowe’s specific evidential argument from evil. If a proponent of SC1 tried to challenge the argument, they could always be rebuffed on the grounds that SC1a (which is consistent with their general skepticism) does not undermine the argument. This is illustrated below.




In other words, to defeat a specific version of the evidential problem you need to have a specific version of skeptical theism — one that accounts for our inability to make warranted inferences about the likely gratuitous nature of some specific type of evil. You cannot simply fall back on general formulations of skeptical theism.

That’s Wielenberg’s meta-argument and he tries to leverage it to his advantage in formulating the argument from abandonment and suffering. I’ll talk about that some other time.

Monday, August 24, 2015

Is God the source of meaning in life? Four Critical Arguments




(Previous Entry)

Theists sometimes argue that God’s existence is essential for meaning in life. In a quote that I have used far too often over the years, William Lane Craig puts it rather bluntly:

If there is no God, then man and the universe are doomed. Like prisoners condemned to death, we await our unavoidable execution. There is no God, and there is no immortality. And what is the consequence of this? It means that life itself is absurd. It means that the life we have is without ultimate significance, value or purpose. 
(Craig 2007, 72)

It is clear from this that, for Craig, God is essential for meaning. Without him our lives are absurd. Is this view of the relationship between God and meaning correct? Is God the source of meaning in life? Or could our lives have meaning in His absence?

In the previous entry in this series, I looked at Megill and Linford’s recent argument about the relationship between God and meaning. To recap, they argued that God’s existence is sufficient for meaning in life. This is because God, being omnibenevolent and omnipotent, would not create beings with meaningless lives. To do otherwise would be to create a sub-optimal world in which people are susceptible to gratuitous suffering, and since it is widely-accepted that gratuitous suffering is incompatible with the existence of God it cannot be the case that He would create such a world. Megill and Linford also argued that this conclusion could be used to craft a novel argument for atheism, viz. if there is at least one meaningless life, then God does not exist.

This is an interesting and provocative argument, and it clearly suggests that God might be important for meaning. But it does not vindicate Craig’s position. It shows that God’s existence is sufficient for meaning; it does not show that God is necessary for meaning (i.e. that God is the source of meaning). This is an important distinction. If there is no necessary relationship between God and meaning, then it is possible to have a purely secular theory of meaning. And if it is possible to have a purely secular theory of meaning, then it is also possible for their novel argument for atheism to work (as I explained at the end of the last post).

The second half of Megill and Linford’s paper is dedicated to defending the view that God is not the source of meaning in life. They present four different arguments in support of this view. I want to look at each of them in the remainder of this post. I have given these arguments names but, in case you want to read the original paper, be warned that the names are my own invention. One other forewarning: the claim that God is sufficient for meaning is taken for granted in the following discussion. This does have an effect on the plausibility of some of what follows, though this will be flagged when appropriate.


1. The Possible Worlds Argument
The first argument asks us to imagine two different possible worlds:

G: A world in which God definitely exists and which is a perfect duplicate of the actual world.
NG: A world in which God definitely does not exist and which is a perfect duplicate of the actual world.

Both of these worlds are identical in terms of the lives that pass in and out of existence; the events that take place; and the outcomes that are achieved. The only difference is that God exists in G but not in NG. Linford and Megill suggest that both worlds are epistemically possible, i.e. for all we know we could be living in G or NG. What effect does this have on the meaning of our lives?

If we live in G, then our lives definitely have meaning. This follows from the argument in part one: if God exists, he would not allow us to live meaningless lives. That’s obvious enough. What if we live in NG? Well, then it depends on whether God is necessary for meaning or not. If he is necessary for meaning (i.e. if he is the source of meaning) then our lives in NG are meaningless. But if he is not necessary, then there is some hope (it depends on what the other potential sources of meaning are).

Let’s assume for now that God is necessary for meaning. This forces us to conclude that our lives in NG are meaningless. Is that a plausible conclusion? Megill and Linford argue that it is not. If it were true, then it would also follow that the actual content of our lives had no bearing on their meaningfulness. Remember, our lives are identical in G and NG; the only difference is that God exists in one and not in the other. But surely it is implausible to conclude that what we do (the actions we perform, the events we participate in, etc.) has no bearing on the meaningfulness of our lives? This gives us the following argument:



  • (1) Imagine two (epistemically) possible worlds: G and NG. God exists in G and not in NG, but otherwise both worlds are identical to the actual world in which we live. Thus, the content of our lives is the same in G and NG.

  • (2) If God is necessary for meaning, then our lives are meaningless in NG; if God is sufficient for meaning, then our lives are meaningful in G.

  • (3) Therefore, if God is necessary for meaning, the actual content of our lives has no bearing on whether or not they are meaningful (from 1 and 2).

  • (4) It is implausible to assume that the content of our lives has no bearing on their meaningfulness.

  • (5) Therefore, God must not be necessary for meaning.
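
It may help to see the core of this argument rendered formally. The following toy sketch in Lean 4 is my own rendering (with invented placeholder names, not Megill and Linford’s own formalisation); it compresses premises (1), (2) and (4), together with the sufficiency claim from part one, into a short derivation of conclusion (5):

-- A toy propositional rendering of the possible worlds argument (my own sketch).
-- 'ContentDetermines' abbreviates premise (4): the content of a life has a bearing
-- on its meaningfulness, so lives with identical content are alike in meaning.
variable (Necessary Sufficient MeaningfulInG MeaningfulInNG ContentDetermines : Prop)

theorem possible_worlds_argument
    (p1  : ContentDetermines → (MeaningfulInG ↔ MeaningfulInNG))  -- lives are identical in G and NG
    (p2a : Necessary → ¬MeaningfulInNG)                           -- God does not exist in NG
    (p2b : Sufficient → MeaningfulInG)                            -- God exists in G
    (hS  : Sufficient)                                            -- granted from the sufficiency argument
    (p4  : ContentDetermines)                                     -- premise (4)
    : ¬Necessary :=                                               -- conclusion (5)
  fun hNec => p2a hNec ((p1 p4).mp (p2b hS))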



For what it’s worth, the basic gist of the argument being made here — that if God is necessary and sufficient for meaning then what humans do with their lives would make no difference — has been exploited by others in the recent past. Still, the argument can be challenged from several angles. The obvious line of attack is to take issue with premise (1). That premise assumes that our lives really could be identical in G and NG, but surely that is false? Surely, if God exists, his existence would have to make some difference to the content or shape of our lives?

Megill and Linford consider two versions of this response. The first appeals to a necessarily interventionist God:


  • (6) Objection: God is necessarily interventionist, i.e. he changes the course of events in the world. Consequently, G and NG could not be identical.


Megill and Linford respond to this by defending a narrower version of premise (1). They concede that God could intervene in some people’s lives, but point out that it is accepted (by ‘most’ theists) that there are at least some individual lives that aren’t affected by divine intervention. Those lives would be identical across both G and NG and the argument could still go through for the people living those lives. Similarly, if the claim is that God’s intervention is itself necessary for meaning, you run into the problem that God does not intervene in all lives. That means that those lives will lack meaning, which is inconsistent with the argument presented in part one (i.e. that if God exists, all lives must have meaning).


  • (7) God does not intervene in all lives hence those lives could be identical across G and NG; furthermore, if such intervention is necessary for meaning, you run into the problem that lives in which God does not intervene would be meaningless, which is inconsistent with the claim that God is sufficient for meaning.


Some of that seems plausible to me but I wonder whether a theist could wiggle out of it by insisting that God does intervene (in some minimal way) in every life (e.g. through creation or at the end of life). Some people may not appreciate it or be aware of it, but that doesn’t matter: his minimal intervention is still the secret sauce that saves us from meaninglessness.

The other version of the objection focuses on the afterlife:


  • (8) Objection: G and NG are not identical because the afterlife would exist in G and the afterlife is what confers meaning on our lives.


This is certainly a popular view among theists. The earlier quote from Craig made a direct appeal to the importance of immortality in our account of meaning. Megill and Linford offer two responses. The first is to argue that an afterlife is epistemically possible on atheism. In other words, there is at least one epistemically possible atheistic universe in which humans live forever. So God isn’t necessary for immortality. The other response is to argue against the notion that immortality is necessary for meaning. They do this by appealing to the fact that some events of finite duration appear to have value, and that sometimes the value that they appear to have is a direct function of their brevity. They give the example of one’s days as an undergraduate student, which are probably more fondly remembered because they don’t last forever. They could also give the example of lives that go on forever but seem to epitomise meaninglessness, e.g. the life of Sisyphus.



  • (9) It is epistemically possible for there to be an afterlife in NG; and it is unlikely that immortality is itself necessary for meaning.



I suspect theists might respond by agreeing that immortality simpliciter is not necessary for meaning. What is necessary is the right kind of immortality and God provides for that kind of immortality (e.g. through everlasting life in paradise). In doing this, theists are making appeals to some feature or property that God manages to bestow on our lives to make them meaningful. To help us distinguish such claims, Megill and Linford appeal to something they call the fourfold distinction:


The Fourfold Distinction: When discussing the overarching ‘meaningfulness’ of our lives, it is worth distinguishing between four phenomena:
(i) The significance we attribute to our own lives;
(ii) The purpose to which we devote our lives;
(iii) The significance God attributes to our lives;
(iv) The purpose for which God created us.


The theist might concede that life in NG could have (i) and (ii), but it could never have (iii) and (iv). They are what make the crucial difference. They come from outside our own lives and confer meaning upon us. The other arguments presented by Megill and Linford try to deal with these sorts of claims.


2. The External Source Argument
The next argument is something I am dubbing the external source argument. It works like a dilemma involving a disjunctive premise (i.e. a premise of the form ‘either a or b’). The disjunctive premise concerns the possible sources of meaning in life. Megill and Linford suggest that there are only two possibilities: (a) the source is intrinsic/internal to our individual lives, i.e. human life is meaningful in and of itself; or (b) the source is extrinsic/external to our lives, i.e. what we do and how that relates to some other feature of the universe is what determines meaningfulness. The problem is that neither of these possibilities is consistent with God being the source of meaning.

The full argument works a little something like this:



  • (10) If life has meaning, then that meaning is either intrinsic/internal to life or extrinsic/external (i.e. dependent on what we do and how that relates to something external to us).

  • (11) If the meaning is intrinsic/internal to life, then God is not the source of meaning.

  • (12) If the meaning is extrinsic/external, then God might be the source of meaning (though that depends on what else we know about meaning and God’s relationship to it).

  • (13) We know that if God exists, then every life must have meaning (the sufficiency argument - from the previous post).

  • (14) Therefore, we know that if God exists, every life must have meaning irrespective of how that life is lived and how the person living it relates to God (from 13 and previous discussion).

  • (15) Therefore, God cannot be the external source of meaning.

  • (16) Therefore, either way, God cannot be the source of meaning in life.



This formalisation is my attempt to make sense of the argument presented in Megill and Linford’s article. The first three premises should be relatively uncontroversial. The argument does not assume that life has meaning, merely that if it does, the meaning must be internal or external. It is pretty obvious that internal meaning excludes God as the source. That just leaves the external possibility. The problem is that the sufficiency argument seems to suggest that how we live our lives makes no difference to their meaning, which in turn seems to rule out the claim that how we relate to God (or how he relates to us) is what infuses our lives with meaning.

So far, this is very similar to the previous argument. The chief difference comes when Megill and Linford develop the argument by considering four possibilities: (i) that the purpose to which we devote our lives matches the purpose for which God created us; (ii) that the purpose to which we devote our lives does not match the purpose for which God created us; (iii) that the significance we attribute to our lives matches the significance God attributes to us; or (iv) that the significance we attribute to our lives does not match the significance God attributes to us. They argue that none of these possibilities is consistent with God being the source of meaning.

I’ll briefly summarise their reasoning. Suppose (i) is true: our purpose matches God’s purpose for our lives. There are two problems with this. First, it is not clear how one being creating us for a purpose necessarily makes our lives meaningful. When we consider analogous cases (e.g. a scientist creating a child for the purpose of organ donation) we often find something lamentable or problematic about the life in question. We think it robs us of proper autonomy and choice. At the very least, it would seem to depend on the nature of the purpose and not on the mere fact that another being has created us for a purpose. Second, we have the NG problem, outlined in the previous argument. We could imagine two worlds (G and NG) in which we live for identical purposes, albeit in one of these worlds God does not exist. Does this rob us of something important? Megill and Linford suggest that it does not: if our lives are directed toward the same end, they should be equally valuable. I suspect a theist would challenge this on the grounds that there are certain divine purposes that simply would not be possible in NG.

Suppose (ii) is true: our purposes don’t match. If that’s the case, then it seems like God would have created a particularly odd world. If he is rational, then he would want to accomplish his goals through his actions. And if he is truly omnipotent and omniscient, then surely he would not fail to create beings that matched his goals?

Suppose (iii) is true: we attribute the same level of significance to our lives as God does. In that case, Megill and Linford think that we once again have the G vs NG problem: “we would attribute the same importance to our lives regardless of whether we lived in G or NG. Therefore it is difficult to see what difference God would make in this scenario.” (Megill and Linford, 2015).

Finally, suppose (iv) is true: there is a mismatch between the significance we attribute to our lives and the significance God attributes to them. There are then two possible mismatches: either we attribute more significance than God does, or less. If we attribute more, then Megill and Linford argue ‘our lives would be imbued with a deep sense of importance (even if inappropriate) in both G and NG. So it is difficult to see why [we] would need to be in G as opposed to NG for our lives to have meaning.’ (Megill and Linford 2015) And if we attribute less significance, then we are confronted with a variant on the problem of evil: people would be made to suffer needlessly by thinking that their lives were less important than they actually are.

I have my problems with all of this. While I agree with the insight at the heart of the argument (if God exists, then what we do will make no difference to the ultimate meaning/significance of the universe), I think Megill and Linford do a poor job of showing that God cannot be an external source of meaning. One reason for this is that they don’t spend enough time distinguishing between the different concepts (i.e. purpose, meaning, significance); another is that many of the points made here simply rehash points already made earlier in their article. The main reason, however, is that throughout this section of their paper they seem to assume a largely subjectivist standard of success for their argument. In other words, they assume that if we think our lives have meaning (or significance or purpose or whatever) then that’s good enough. This certainly seems to be the assumption at play in the passages quoted in the two preceding paragraphs. In both instances, Megill and Linford rule out the importance of God on the grounds that if we attribute a high level of significance to our own lives, they must have that level of significance. They don’t seem to countenance the view that our subjective beliefs might be wrong.

This is problematic because it is then all too easy for a theist to take advantage of the distinction between objective and subjective standards of success. The theist could argue that, irrespective of what we think about the purpose or significance of our lives, what matters is that there is an objective standard for these things. They could bolster this argument by pointing to secular philosophers who have argued for similar views. And then they could argue that God is the only thing that could possibly provide the appropriate objective standard. In this sense, they could argue that the debate is very similar to that about God’s role in grounding objective moral truths. The problem with Megill and Linford's argument is that it too readily assumes the presence of meaning/significance when we subjectively perceive it to exist.

Now, don’t get me wrong: I think there is plenty wrong with the claim that God is the only thing that could ground the appropriate objective standard. I have tried to explain why I think that in several previous posts. I just don’t think that this particular argument, one of four in Megill and Linford’s article, is making the best case for this view.


3. The No-Belief Argument
I’ll try to deal with the two remaining arguments more quickly. The first of these focuses on the role of theistic belief in any theistic account of meaning. I’m calling it the ‘no-belief’ argument because it highlights the potential irrelevance of belief in God for meaning, which is then alleged to be disturbing for the theist.

The argument starts with the supposition that God is necessary for meaning, i.e. that He is an external source of meaning in our lives. This means that we must stand in some sort of relation to God in order for our lives to have meaning. That relation could take many different forms. It could be that we have to achieve salvation with God in the afterlife. It could be that we need to follow a specific list of divine commandments. The precise details of the relation do not matter too much. What matters is whether belief in God is going to be an essential part of that relation. In other words, on the theistic account, is it the case that we must believe in God in order for our lives to have meaning?

You might argue that it is. If you are a theist, you would like to think that your belief makes some kind of a difference. But in that case you run into a version of the problem of divine hiddenness. There are some people who are blameless non-believers either because they were raised in a time and place where belief in God was not available to them, or because they have honestly tried to believe and lost their faith. Either way, if you think belief is necessary for meaning, it would follow that these people are living meaningless lives. This is incompatible with the sufficiency argument outlined in part one. Recall the conclusion to that argument: if God exists, all lives must have meaning. It follows therefore that belief in God cannot be necessary for meaning.

But then the theist is in the rather odd position of believing that God is necessary for meaning but belief in Him is not. This is certainly an odd view of meaning for people like William Lane Craig, who insist that achieving salvation through a personal relationship with God is the ultimate source of meaning and purpose. And it would probably be uncomfortable for many other theists.

My feeling is that although theists would be uncomfortable with this idea, this argument once again fails to really upset the view that God is a necessary, external source of meaning. I feel like a theist could bite the bullet on this one and accept that belief in God is not important, but continue to maintain that something else about God is important (e.g. that he will save us all in the end, irrespective of belief). I’ve certainly conversed with a number of liberal, universalist-style Christians who embrace this idea. Their views about God and meaning are often maddeningly vague, but they aren’t quite susceptible to this objection.


4. The New Euthyphro Argument
The final argument is a variation on the Euthyphro dilemma. As you probably know, the Euthyphro dilemma is a famous objection to theistically-grounded views of morality, such as Divine Command Theory. It is named after a Platonic dialogue. The dilemma poses the following challenge to the proponent of divine command theory: for any X (where X is an allegedly moral act), is X morally right because it is commanded by God, or is it commanded by God because it is morally right? If it is the former, then it seems like the goodness of X is purely arbitrary (God could have commanded something else). If it is the latter, then it seems like God is not the true ontological foundation for the obligation to X; that obligation is independent of God. Neither of these conclusions is entirely welcome.

Megill and Linford argue that a similar dilemma can be posed about the relationship between God and meaning. To anyone who claims that God’s existence is necessary for meaning, we can pose the following question: do our lives have meaning simply because God decrees that they do, or does God choose his decrees based on some independent standard of meaningfulness? To make this more concrete, suppose we accept the view that meaning is provided by God’s plan of salvation. We then ask: is this meaningful simply because it is God’s plan, or is it God’s plan because it is independently meaningful? If it’s the former, then we run into the problem that God could have picked any plan at all and this would have made our lives meaningful. For instance, God could have decided that rolling a boulder up and down a hill for eternity provided us with meaning. That doesn’t seem right. If it’s the latter, then we run into the problem that God is not the true source of meaning. It is an independent set of properties or values.

Megill and Linford develop this argument in more detail by asking whether any of the responses to the traditional Euthyphro dilemma can apply to this novel version. I won’t get into these details here because I have explored those responses before and I think they are equally implausible in this context. In other words, I think this argument is basically correct. God cannot be the source of meaning because meanings (like other values) are most plausibly understood as basic, sui generis and metaphysically necessary properties of certain states of affairs. I have defended this view on previous occasions.


5. Conclusion
This post has been quite long. Much longer than I originally anticipated. To briefly recap, the question was whether God was necessary for meaning. To be more precise, the question was whether God was the source or grounding for meaning in life. Megill and Linford presented four arguments for thinking that He could not be. My feeling is that only two of these arguments are really worthy of consideration: (i) the possible worlds argument, which is based on a thought experiment about different epistemically possible worlds; and (ii) the new Euthyphro argument, which is based on the classic Euthyphro objection to divine command theory. The other two arguments strike me as being more problematic.

Saturday, August 22, 2015

Assessing the Philosophical Apologetics of William Lane Craig (Series Index)





Love him or loathe him, William Lane Craig is probably the most popular and successful of the modern philosophical apologists. And even though we view the world in very different ways, I have to admit that I find something admirable about the guy. His scholarly credentials are pretty impressive; he has published a long list of academic books and peer-reviewed articles; his debate performances are polished and precise; and his appetite for philosophically-inclined defences of the Christian faith is seemingly insatiable. There are things to dislike about him too, for sure, but I'm not in the business of disliking people so I won't get into that here.

Over the years, I have written a number of posts assessing various aspects of Craig's apologetical programme. These assessments have never been comprehensive. Those who are familiar with Craig's work will know that he usually mounts a five-part defence of his faith: (i) an epistemological defence, based on the testimony of the Holy Spirit; (ii) a cosmological defence, based on his re-working of the Kalam Cosmological Argument; (iii) a teleological defence, based on the fine-tuning argument (though this is never particularly well-developed); (iv) a moral defence, based on both modified Divine Command Theory and claims about the lack of value and meaning in an atheistic universe; and (v) a historical defence, based on his argument for the historicity of Jesus' resurrection.

I've only dipped into two parts of this apologetical programme in my writings. First, I have examined various critiques of the Kalam cosmological argument, focusing in particular on the philosophical (as opposed to scientific) aspects of the argument. Second, I have considered challenges to Craig's views about the relationship between God and morality, and between God and meaning in life. This post brings together all my writings on these topics. I have divided it into three main sections, treating morality and meaning as separate topics. I will use this index for future updates.


1. The Kalam Cosmological Argument

The Kalam Cosmological Argument is very simple. It claims that (i) everything that begins to exist must have a cause of its existence; (ii) the universe began to exist; and (iii) therefore, the universe must have a cause of its existence. It then goes on to argue that this cause must be an immaterial, atemporal and personal being (i.e. God). I have looked at challenges to all aspects of this argument. Here is a complete set of links.


  • Must the Beginning of the Universe have a Personal Cause? This four-part series of posts looked at an article by Wes Morriston, who is probably the foremost critic of the Kalam. This article challenged the claim that everything that begins must have a cause of its existence and that the cause must be immaterial and personal in nature. This series appeared on the blog Common Sense Atheism (when it was still running), so the links given below will take you there:

  • Schieber's Objection to the Kalam Cosmological Argument - Justin Schieber is one of the former co-hosts of the Reasonable Doubts podcast, and a prominent atheist debater. Back in 2011 he offered an interesting critique of the Kalam argument. Briefly, he cast doubt on Craig's claim that God could have brought the universe into existence with a timeless intention. I tried to analyse and formalise this critique in one blog post:

  • Hedrick on Hilbert's Hotel and the Actual Infinite - The second premise of the Kalam is often defended by claiming that the past cannot be an actual infinite because the existence of an actual infinite leads to certain contradictions and absurdities. This is probably the most philosophically interesting aspect of the Kalam argument. One of the thought experiments Craig uses to support the argument is Hilbert's Hotel. In this series of posts, I look at Landon Hedrick's criticisms of this thought experiment.

  • Craig and the Argument from Successive Addition - Even if the existence of an actual infinite is not completely absurd, Craig argues that it would still be impossible to form an actual infinite by successive addition. This is his second major philosophical argument in defence of premise (2) of the Kalam. In this post, I look at Wes Morriston's criticisms of this argument:

  • Puryear on Finitism and the Beginning of the Universe - This post looked at Stephen Puryear's recent(ish), novel, objection to the Kalam. It is difficult to explain in a summary format, but suffice to say it provides an interesting, and refreshing, perspective on the debate: 


2. The Moral Argument

Craig claims that the moral argument is the most apologetically useful one. That is to say, it is the one that is most deeply felt by would-be theists. Most people want there to be cold hard objective moral facts. They fear that in a world without God there would be no such facts. The moral argument plays upon these fears. Craig has formulated the argument in different ways over the years. Roughly, it works like this: (i) if God does not exist, objective moral facts cannot exist; (ii) objective moral facts exist; (iii) therefore, God exists. (Some people worry about the logic of this, but it is fine: writing G for 'God exists' and M for 'objective moral facts exist', premise (i) says not-G implies not-M, premise (ii) gives us M, modus tollens then yields not-not-G, and a suppressed double negation elimination delivers G.)
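
For anyone who likes to see that inference spelled out mechanically, here is a minimal sketch in the Lean proof assistant. The proposition names G and M are just my shorthand from the parenthesis above, not Craig's notation:

```lean
-- A minimal sketch of the modus tollens behind the moral argument.
-- G and M are illustrative shorthand: G = "God exists", M = "objective moral facts exist".
example (G M : Prop)
    (h1 : ¬ G → ¬ M)   -- (i) if God does not exist, objective moral facts do not exist
    (h2 : M)           -- (ii) objective moral facts exist
    : ¬¬ G :=          -- hence not-not-(God exists)
  fun hng => h1 hng h2
```

Note that this constructive step only gets you as far as not-not-G; the final move to (iii) is the (classically trivial) double negation elimination.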

I'm not sure that the moral argument is all that interesting from a philosophical perspective. But I think the alleged relationship between God and moral facts is. As are the more general metaethical questions raised by the argument. I've explored this in many of my writings. The ones listed below are only those that specifically invoke Craig's work:


  • Must Goodness be Independent of God? - This was a short series of posts about Wes Morriston's article of the same title. The series looked at Craig and Alston's solutions to the Euthyphro dilemma. This was one of my early attempts to get to grips with this topic, and is thus probably surpassed by some of my later efforts.

  • Some thoughts on theological voluntarism - This was a post I wrote in response to the Craig-Harris debate way back in Spring 2011. Although prompted by that debate, the post tried to give a decent introduction to theological voluntarism and to highlight a possibly neglected critique of that view, one that Harris could have used in the debate. I have written about this critique subsequently, though never focusing specifically on Craig's work.

  • Craig on 'Objective' Moral Facts -  Craig repeatedly appeals to the notion that there are objective moral values and duties. But what exactly does he mean by saying they are "objective"? This series subjects the crucial passages in Reasonable Faith to a close textual and philosophical analysis. It also suggests a general methodology for determining the merits of any metaethical theory. 

  • God and the Ontological Foundation of Morality - Craig insists that only God can provide a sound ontological foundation for objective moral values and duties. But what would that mean and is it right? With the help of Wes Morriston (once again) I try to answer that question in the negative.

  • Divine Command Theory and the Moral Metre Stick - In his efforts to avoid the Euthyphro dilemma, Craig sometimes relies on William Alston's analogy of the metre stick. According to this analogy, God stands in the same relation to the "Good" as the model metre stick stands in relation to the length "one metre". Does that make any sense? Jeremy Koons argues that it doesn't and in this series of posts I walk through the various steps of Koons's argument.

  • Is Craig's Defence of the DCT Consistent? - Erik Wielenberg has argued that Craig's defence of the DCT is fatally inconsistent. This series looks at Wielenberg's arguments, but also goes beyond them in certain important respects by trying to address the deeper metaphysical reasons for the inconsistency. 

  • Craig and the 'Nothing But' Argument - Craig sometimes argues that on the atheistic view humans are nothing but mere animals or collections of molecules, and that this thereby robs humans of moral significance. Can this really be a persuasive objection to atheistic morality? Using the work of Louise Antony, I argue that it can't.

  • Craig and the Argument from Ultimate Accountability - Craig believes that the absence of any ultimate accountability for immoral behaviour is a mark against an atheistic account of morality. Again, with the help of Louise Antony, I suggest that this is not the case.

  • Is there a defensible atheistic account of moral values? - Craig and his co-author JP Moreland have argued that atheism has serious problems accounting for the existence of moral values. Wielenberg counters by arguing that there is a defensible atheistic account of value, and this account is no worse off than Craig and Moreland's preferred account of moral value.

  • Necessary Moral Truths and Theistic Metaethics - Atheists sometimes respond to Craig's moral argument by insisting that some moral truths are necessary and so do not require an ontological grounding/explanation. Craig has responded by arguing that just because something is necessary does not mean that it does not require a grounding/explanation. I wrote an academic paper (published in the journal Sophia) challenging this argument.


3. God and the Meaning of Life

Craig has a pretty disparaging take on the atheistic worldview. Without God there is nothing but despair. We are condemned to live short, finite lives in a meaningless universe. But with God there is hope. He bestows purpose, meaning and significance on our lives. Is this a plausible construal of the relationship between God and meaning in life? I have written a handful of posts assessing Craig's answer to this question:


  • Craig and Nagel on the Absurd - Back in the days when I did podcasts, I did this one about Craig and Nagel's arguments about the absurdity of life in an atheistic universe.


  • Theism and Meaning in Life - With the help of Gianluco Di Muzio's work, this series of posts tries to do two things. First, it tries to clarify the logic of Craig's arguments against meaning in a godless universe; and second, it tries to present an alternative, godless conception of meaning that avoids Craig's criticisms.


  • God, Immortality and the Futility of Life - Craig claims that two conditions are necessary for meaning in life: (i) immortality and (ii) God's existence. Both are necessary. But why exactly is immortality required? Toby Betenson suggests that it is because if we live forever there is a chance that we will make a causal difference to something of ultimate significance. But this sets up a tension with Craig's theory of ultimate justice. It turns out that if God exists, then we do not make a causal difference to anything of ultimate significance. This post summarises Betenson's argument.



Wednesday, August 19, 2015

The Shape of an Academic Career: Some Reflections on Thaler's Misbehaving




I have long been interested in behavioural science and behavioural economics — heck, I even wrote a masters thesis about it once. I have also long been interested in the nature and purpose of an academic career — which is not that surprising since that’s the career in which I find myself. It was for these two reasons that I found Richard Thaler’s recently-published memoir Misbehaving: The Making of Behavioural Economics to be an enjoyable read. In it, Thaler skillfully blends together an academic memoir — complete with reflections on his friends and colleagues, and the twists and turns of his career — and a primer on behavioural economics itself. The end result is a unique and reader-friendly book.

But I don’t really want to review the book or assess the merits of behavioural economics here. Instead, I want to consider the model of the academic career that is presented in Thaler’s book. This is something that has been bothering me recently. Wont as I am to philosophical musings, I do occasionally find myself waking up in the mornings and wondering what it’s all for. Why do I frantically read and annotate academic papers? Why do I try so desperately to publish an endless stream of peer-reviewed articles? Why do I clamour for attention on various social media sites? I used to think it was just because I have a set of intellectual passions, and I want to pursue them to the hilt. If that means spending the majority of my time reading, writing, and sharing my work, then so be it. As Carl Sagan once said ‘when you’re in love, you want to tell the world’ and if you’re in love with ideas, that’s the form that your expression takes.

More recently, I’ve begun to question this view of my life. To this point, I have pursued my intellectual interests in a more-or-less haphazard fashion. If I’m interested in something, I’ll read about it. And if I’m really interested in it, I’ll write about it. I don’t worry about anything else. I don’t try to pursue any grand research agenda; I don’t try to defend any overarching worldview or ideology; I don’t try to influence public debate or policy. The result is an eclectic, disjointed, and arguably self-interested body of work. Should I be trying to do more? Should I be focused on some specific research agenda? Should I worry about the public impact of my work?

There seem to be at least two reasons for thinking I should. First, in terms of research agenda, it seems that one way to get ahead in academia (i.e. to win research funding, wider acclaim and promotion) is to be an expert on a narrow range of topics. The eclectic and haphazard approach of my previous work is out of kilter with this ideal. Being a jack of all trades but master of none is a surefire way to academic mediocrity. Second, pursuing intellectual interests for their own sake can be said to be both irresponsible and selfish. You ought to think about the public impact of your work (if only to save your job in the wake of the recent ‘impact’-fetishism in higher education). You ought to improve the world through what you do. Or so I have been told.

I have some problems with these claims. I don’t enjoy thinking about my work in purely instrumentalist terms. I am not convinced that eclecticism is such a bad thing, or that one should pursue ideological consistency as an end in itself. And while I would certainly like to make the world a better place, I would worry about my lack of competence in this pursuit. Ideas can make the world a much worse place too, as even a cursory glance at history reveals. That said, I do often feel the call of public spiritedness and grubby instrumentalist careerism.

Which brings me back to Thaler’s book….not that he’s a grubby careerist or anything (I don’t know the guy). It’s just that the book, perhaps inadvertently, presents a particular model of an academic career that I found interesting. It’s not the explicit focus of the narrative, but if you zoom out from what he is saying, you see that there are three main stages to his academic career. They weren’t pursued in a strict chronological fashion — there was some overlap and back-and-forth between them — but they are distinguishable nonetheless. And when you isolate them you see how it is possible to build a career from a foundational set of intellectual interests into something with greater public impact.

The three stages were:

Stage I - Pursuing one's Intellectual Curiosities: It would be difficult to be a (research-active) academic without having some modicum of intellectual curiosity. There must be something that piques your interest, that you would like to be able to understand better, or evaluate in more depth. Without a foundation in intellectual curiosity, it would be difficult to sustain the enthusiasm and hard work required to succeed. I choose to believe that anyway, and it certainly seems to be the case for Thaler. As an economics student, he was taught the standard rational utility maximising theory of human behaviour. But then he spotted all these examples of humans behaving in ways that contradicted this theory. He describes all this in Chapter 3 of the book where he talks about the ‘List’. This was something he compiled early in his career, listing all the anomalies that were starting to bother him. A large part of his research was taken up with trying to confirm and explain these anomalies. And the curiosity didn’t stop with this early list either. Later in the book he gives some illustrations of how his curiosity was always being piqued by seemingly mundane phenomena, such as the formula the University of Chicago business school used for allocating offices in its new building, or the behaviour of contestants on popular game shows. I like to think this sort of ‘curiosity for the mundane’ is valuable, partly because I think curiosity is an end in itself and partly because mundane or trivial phenomena often provide insights into more serious phenomena. Either way, curiosity was the bedrock of Thaler’s career.

Stage II - Influencing one's Academic Peers: In academia there are few enough objective standards of success (outside of mathematics and the hard sciences anyway, and even there influencing one’s peers is important). The only true measure of the academic value of one’s research is its acceptance by and influence on one’s academic peers. To some extent, the mere publication of one’s original research in well-respected journals is a way to achieve this influence, but it is often not enough. After all, very few people read such articles. You often need to take a more concerted approach. We see evidence of this in Thaler’s life too. His behavioural research presented a challenge to the received wisdom in his field. If he was going to get ahead and have any impact on the world of ideas, he needed to engage his peers: convince them that there was indeed something wrong with the traditional theory and influence future debates about economic theory. Part V of his book is dedicated to how he did this. Three examples stuck out for me. The first was relatively obvious. It concerned a conference he participated in in 1985. The conference was a face off between the traditional economic theorists and the more radical behaviourists. This conference format forced engagement with more sceptical peers. The second was a regular column he managed to secure in a leading economics journal (Journal of Economic Perspectives). The column was entitled ‘Anomalies’ and in it he presented examples of anomalies challenging the mainstream theory. The articles were intended for the economics profession as a whole, not just for research specialists, and most often involved findings from other researchers (i.e. not just from Thaler himself). The column gave him a regular platform from which he could present his views. And third, there was the summer school for graduate students that he helped to create back in 1992. This provided intensive training for future academic economists in the theories and methods of the behaviouralist school. This helped to ensure a lasting influence for his ideas. Building such outlets for academic influence looks like a wise thing to do.

Stage III - Pursuing Broader Societal Impact: The term ‘academic’ is sometimes used in a pejorative sense. People refer to debates as being ‘strictly academic’ when they mean to say ‘of little relevance or importance’. Academics often struggle with this negative view of their work. Some embrace it and defend to the hilt the view that their research need have no broader societal impact; others try to use their work to change public policy and practice. This is something Thaler eventually tried to do with his work (once the solid foundation in basic research had been established). There are several examples of this dotted throughout the book. The most important is probably the book he wrote with his colleague Cass Sunstein called Nudge. This book tried to show how behavioural research could be used to improve outcomes in a number of areas, from tax collection to retirement saving to bathroom hygiene. The book was published with a popular press and found its way into the hands of current British Prime Minister David Cameron (at the time he was leader of the opposition). Impressed by the book, Cameron established the Behavioural Insights Team to help improve the administration of the British government. Thaler was also involved in setting up and advising this team and he still works with them to this day. Now, I’m sure you could challenge some of the work they have done, but what’s interesting to me here is how Thaler managed to successfully leverage his research work into real-world impact.


Just to be clear, I’m not suggesting that all successful academic careers will have these three stages, or that, if they do, they will look just as they did in Thaler’s case. All I’m suggesting is that there is a useful model in these three stages, one that I might think about using for my own career. Thus, I want to make sure that I always maintain a firm foundation in intellectual curiosity, and use that to generate a valuable set of insights/arguments. That’s effectively what I have spent most of my time doing so far and I will continue to do it for as long as I can. But I don’t want to just leave it at that. From this foundation I want to think about ways in which to influence both my academic peers and the broader society. I don’t think my work lends itself to the same kind of practical impact as does Thaler’s, but I think that’s okay. Societal impact can be generated in other ways, e.g. through public education and inspiration, and I suspect that might be more my thing.