
Ethics itself is concerned with matters of right and wrong. It asks ‘Is it right to do X?’ Meta-ethics, on the other hand, is a ‘second order language’. In other words, it stands back and asks ‘What does it mean to say that something is right or wrong?’

It is a way of looking at the nature and function of ethical statements, in order to understand what they are doing, and therefore how they may be shown to be true or false. Meta-ethics became a key concern during the middle years of the 20th century, as a response to the challenge of Logical Positivism.

Logical Positivism

The issue of religious language was highlighted by the work of the Logical Positivists, early in the 20th century. Logical Positivism held that, if a statement was neither a matter of logic, nor demonstrably true or false with reference to evidence, then it was meaningless.

In his book Tractatus Logico-Philosophicus, published in 1921, Wittgenstein set out a narrow view of what could count as a meaningful proposition. He saw the function of language as being to picture the world. Therefore every statement needed to correspond to some information about the world itself.

This was developed during the 1920s by the Vienna Circle of philosophers who, inspired by Wittgenstein and by the success of science, wanted to find a way of showing statements to be meaningful and either true or false. Known as the Logical Positivists, their work was made popular in the UK by A J Ayer in his book Language, Truth and Logic.

They produced a theory of meaning known as the Verification Principle. This argued that the meaning of a statement was its method of verification. In other words, if I say that something is the case, I mean that - if you go and look - you will see it is the case. A statement is thus only meaningful if it can be proved to be true or false through such evidence. Meaning is identical to ‘method of verification’; this is termed the 'strong' form of the Verification Principle.

Ayer held a rather more modest version of this theory (often termed the ‘weak’ form): a statement can only be meaningful if it is possible to say what evidence would count for or against its truth. In other words, you may not always be able to get the evidence, but you know what it would be like, and your claim therefore has meaning, even if it is eventually shown to be wrong. The positivists argued that, if you cannot specify what evidence would show a statement to be true, then that statement is meaningless. And this, of course, applies to many statements in religion and ethics.

The challenge and philosophical responses

The problem with ethical language is that the statements of ethics are not simply matters of fact. You cannot get an ‘ought’ from an ‘is’. Just because something is the case, it does not mean that it ‘should’ be the case. But if it is not simply factual, how can it avoid the challenge of the Verification Principle? In response to this, there developed a number of different ways of explaining what ethical language was about, with the intention of showing that it was indeed meaningful. Each of these can be thought of as a 'meta-ethical' theory.

This challenge dominated ethics from the 1930s until the 1960s, and we shall examine a number of attempts that were made to find a meaning for ethical statements that would not be dismissed by Ayer’s argument. But in order to appreciate the impact of positivism, it is useful to look at two approaches to ethics that preceded it: metaphysical ethics and intuitionism.

Metaphysical ethics

‘Metaphysical ethics’ aimed to show that morality could be related to an overall view of the world and the place of humankind within it. F H Bradley, in Ethical Studies (1876), argued that the supreme good for humankind was self-realisation. In other words, we act in a way that is morally good when we do those things that allow us to develop ourselves as part of a wider community. Morality is therefore not just about particular actions, but about the character of the people that perform them, and the understanding they have of their part in the wider world.

Now, metaphysical ethics of this sort depends on two abstract ideas: the world as a whole, and self-realisation. Neither of these can be reduced to the sort of evidence that the logical positivists were later to claim as necessary for meaning. Thus, they would have seen metaphysical ethics as meaningless.


Intuitionism

G E Moore argued in Principia Ethica (1903) that the primary term ‘good’ could not be defined. Fundamental moral principles are therefore known by intuition. They cannot be proved to be true or false, but are recognised as soon as they are thought about. Thus we know what it means to say that something is ‘good’, even to say that many different things are good, although we cannot point to any particular quality that makes it so. The analogy Moore used was with colour. We know what ‘yellow’ is, and can recognise it wherever it is seen, but we cannot actually define yellow. In the same way, we know what ‘good’ means, but cannot define it.

He claimed that most earlier ethical theories had fallen into the ‘naturalistic fallacy’ of trying to derive an ‘ought’ from an ‘is’. You may know what ‘good’ is, but you cannot define it.

Moore's theory is not simply to be identified with intuitionism, for his is a theory about the primacy of the term ‘good’. He claimed that we could not define 'good' in terms of anything else, even though we know what it is, can point to it, and base all of our moral arguments upon it. That is not the same thing as saying that absolutely all our moral claims are based on intuition alone.

Now, in contrast to metaphysical ethics, this approach does not depend on any abstract concepts about the world as a whole. On the other hand, ‘good’ is not simply a word we choose to apply to objects, but is the name of a quality that inheres in things. He thought of good as something rather like ‘beautiful’ - a quality that could be found in things but not described. This approach came to be known as intuitionism, although that was not a term that Moore himself used for it. Moore believed that the task of morality was to maximise the ‘good’.

In a further development of this approach, H A Prichard (1871-1947) argued that you could not reduce moral obligation to anything else. Like Moore’s ‘good’, it was something known directly by intuition (his work on this, Moral Obligation, was published in 1949).

Notice what is implied by the intuitionist approach: you cannot use any factual evidence to show that something is good or that one has a moral obligation. All basic moral judgements are self-evident.

Conflicting duties?

Another Oxford philosopher influenced by Moore, W D Ross (1877-1971), argued in The Right and the Good (1930) and The Foundations of Ethics (1939) that Moore was right to deny that you could equate goodness with any natural property, but that he was wrong in arguing that the only criterion for moral obligation was to maximise the good. Rather, he pointed out that one may have a conflict of duties, and it may not be at all obvious which is to take priority. My duty is therefore self-evident (known through intuition) provided that it does not conflict with another self-evident duty.

That is why we have moral dilemmas – if you did not have a conflict of duties, a conflict in what you see as ‘good’, then everything would be straightforward. The snag is that, in real-life situations, Ross was right: there are always conflicting duties.
Take the discussion of euthanasia as an example: I may believe intuitively that I have a duty to uphold the value of life and to promote life rather than death; but, at the same time, I believe that I have a duty to be kind and to relieve suffering in any way I can. The conflict is that I may believe that, in order to relieve suffering, I should help someone to end their life, and hence that I appear to be denying the absolute value of life. Moral dilemmas are based on such conflicts.


It will now be clear why the logical positivist position was so threatening to ethics. If meaning is only given with respect to the evidence provided by the senses, then metaphysical ethics is meaningless, since it is based on abstract concepts that do not have a ‘cash value’ in terms of experience. But the attempt to escape from that charge and claim that morality is known through intuition is equally threatened. For if goodness and obligation cannot be ‘reduced’ to evidence of any sort, then - as far as the logical positivists were concerned - they too were meaningless.

The positivists hoped to put language and meaning on the same sure basis as the physical sciences. Everything had to be tested out in terms of evidence: no evidence, no meaning.
The key feature here is the naturalistic fallacy. If we can never argue from an ‘is’ to an ‘ought’, then any approach to language which tries to base meaning on evidence must automatically rule out the possibility of meaningful ethics.

But the positivist claim went further. Wittgenstein (and others) argued that we can have no knowledge of private mental states. They argued that to describe someone as angry, for example, did not imply that one had special access to a mental state. Rather, the word ‘angry’ describes someone who is red in the face, shouting, waving a fist in the air, and so on. Anger ‘means’ all that, because that is the only way in which I can specify why I used that word to describe that person. To take another example: an itch, on this theory, is merely a disposition to scratch. Wanting to scratch is what we call ‘having an itch’. There is no itch independent of the disposition to scratch.

So how do you start to get the sort of evidence for the meaning of a normative moral statement (i.e. one that says it is right or wrong to do something) that would satisfy a logical positivist? Notice here that we are only dealing with normative claims; a descriptive or hypothetical statement is different – ‘If you do that, you are likely to get caught’ can be shown true or false with reference to evidence; it is simply a description based on observation. Equally, ‘If you do not feed her, she will starve’ is simply descriptive and therefore presents no problem. But to say ‘You should feed her’ is not descriptive, but normative. ‘You are wrong to do that’ is not a statement that can be directly backed by evidence.


Emotivism

The criticism of moral statements by the logical positivists was based on the assumption that such statements were making factual claims. A J Ayer argued for a theory about the nature of ethical statements that became known as emotivism.

An emotivist view gets round the logical positivist rules about what is meaningful, by claiming that moral statements are not factual, but express the feelings of the person who makes them. If you like something then you call it ‘good’, if you dislike it, ‘bad’. Thus two people can consider exactly the same facts and come to quite different moral conclusions. One cannot say that one is right or the other is wrong, because there are no facts that separate them, one can only accept that each is using moral judgements to express his or her emotional response to that set of facts.

This approach was taken by C L Stevenson in his Ethics and Language (1944). He was particularly concerned about how moral statements are used, and what results they are intended to produce. He claimed that the word ‘good’ functioned as a persuasive definition: it was there to express your emotions and to influence the attitudes of others. On the other hand, if you try to go on from there to give some reason why you feel that way, that is more than emotivism will allow.

One key question to ask in considering this theory is: How do emotions expressed in ‘moral’ statements differ (if at all) from other emotions? Otherwise, moral statements are simply a listing of how we feel, and that does not seem to do justice to the way in which moral statements are actually used. I may sense that, when I say of something that it is right or good, I am doing more than simply describing my emotions at the time. What more am I doing when I make moral statements? Let us move to consider a second theory.


Prescriptivism

Another approach to the same problem is to say that to make a moral statement is to prescribe a particular course of action. This approach was taken by R M Hare (The Language of Morals (1952) and Freedom and Reason (1963)). He argued that a moral statement is ‘prescribing’ a course of action, recommending that something should be done, not just expressing a feeling. On the other hand, moral statements are rather more than commands. A command is simply a request to do a particular thing at a particular moment, whereas a moral statement is making a more general suggestion about what action should be taken. In other words, a moral statement is both prescriptive and also universalisable: suggesting what everyone should do in the circumstances. Hare believed that, in this way, it was possible to apply reason and logic to matters of value.

‘If I ought to do this, then somebody else ought to do it to me in precisely similar circumstances.’ So I have to ask myself: ‘Am I prepared to prescribe that somebody else should do it to me in like circumstances?’
(R M Hare in Bryan Magee Men of Ideas, 1978)

Prescriptivism suggests that, in responding to moral statements, we do not acknowledge that they are either true or false, but simply accept or reject the actions they are prescribing. Thus you may say to me ‘It is right to feed those who are starving.’ If I agree with that statement, what I am actually saying is ‘Yes, and that is what I intend to do.’

All of this debate has come, of course, from the basic argument that you cannot derive an ‘ought’ from an ‘is’. If (like the logical positivists) you believe that a statement only means something if you can point to evidence for it, where do you find your evidence for moral statements? Either in the area of human emotions expressed through them, or in the courses of action that such statements might prescribe - the first leads to Emotivism and the second to Prescriptivism.

Both theories avoid the claim that moral statements are meaningless, by pointing to the evidence of what actually happens when moral statements are made - for, whether or not they are meaningless in themselves, it is clear that moral statements do actually express emotions and recommend courses of action. In this, ethics moved in parallel with much linguistic philosophy in the first half of the 20th century - away from a narrowly defined sense of meaning toward an appreciation of language that can be used in many different ways.

It is possible to argue that moral statements are means by which we overcome selfish perspectives. John Mackie (1917-1981), in Ethics: Inventing Right and Wrong (1977), argued that:
• there are no objective moral values
• therefore all moral claims are objectively ‘false’
• but we can continue to use moral language if it helps us to overcome narrow views and sympathies.
In other words, Mackie was building on the work of emotivism and prescriptivism by saying that morality has a function but not an objective basis.

On the other hand, there is a fundamental question to ask of this approach: Why not enjoy having limited sympathies? Why bother with moral codes at all? Why should it make any difference if our views are completely selfish or universally benevolent? If we are ‘inventing’ right and wrong, why are we doing it? What does humankind have to gain from having developed a sense of conscience? It would seem that at some point ethics needs to be based on something other than itself. Morality remains a phenomenon which needs some explanation.

©  Mel Thompson







Amazingly, Wittgenstein was hugely influential twice, offering very different views of language.

His early work, Tractatus, saw language as picturing reality, inspired by the precision of logic and science. It was a fundamental text for the development of Logical Positivism.

Later, his Philosophical Investigations, would introduce a very different view of how we use and understand language - looking at its function as a 'form of life' or a set of rules like those of a 'game'.

In his final years, he was working on a different set of issues, concerning 'certainty' and what we call 'foundationalism' - in other words, the quest for what can be certain enough to act as a foundation for all the rest of our knowledge. Had he lived longer, he would no doubt have shaken up the philosophical world a third time.






Then the world changed...

Much of the discussion of ethics until about the 1960s was concerned with the attempt to find some meaning for ethical language. It felt a bit like a dead end. Philosophers did not claim to be able to say that anything was good or bad, only what it meant to say that something was good or bad.

Then, in connection with medicine, warfare, environment etc, there arose a huge number of basic issues that needed to be addressed. People wanted answers to moral dilemmas, or at least guidelines to show where answers might be found. Ethical committees were set up to examine moral issues in medicine, and the war in Vietnam prompted many people to ask about the morality of warfare. Meta-ethics seemed remote from these practical concerns.

Philosophy (especially led by utilitarianism) started to make moral claims again. The world had moved on: a philosopher might say that a moral statement is meaningless, but that does not solve a real ethical dilemma.
