Alsuren

April 4, 2009

Notes on Kant

Filed under: Uncategorized — alsuren @ 12:55 am

These are a few notes from reading James W. Ellington’s translation of Kant’s “Grounding for the Metaphysics of Morals”. It’s mostly from stuff I wrote in my copy of the book, so if you want me to post page numbers/context for anything, shout. Most “quotes” are paraphrased either to represent my take on them, or for comic effect.

Intro (by Ellington): “This book is meant to be an introduction to Kant’s ideas. I will now proceed to run over them all in a summary as if you’ve already read his entire collected works.”… in an intro to an intro. Nice work.

Section 1: “From the Ordinary Knowledge of Morality to the Philosophical”

“A will has ‘moral worth’ due to its motivations, rather than its actions.” (obviously, it is difficult to analyse this kind of thing externally, so you could probably infer from this a “judge not”-like statement.)

“There is a distinction between instinct and reason.” (which I’m not sure I agree with).

“Any action not explained by anything like self-interest will be explained by ‘duty’ and therefore have ‘moral value’.” (all other actions seem to be considered neutral)

At the end of this section, I have a few notes on the Categorical Imperative.
“You should always be able to desire that your policy should become the policy used by all.” (Kant’s formulation refers to a ‘maxim’ guiding an action, whereas I prefer to think of ‘policy’ guiding actions/choices.)

So when evaluating policy, I would say at this point that there are two approaches:
a) Pick the policy which gives you the highest expected reward if you follow it and everyone else acts normally (This will be morally neutral according to Kant).
b) Pick the policy which gives you the highest expected reward if everyone follows it (This will have ‘moral value’ according to Kant).
Kant seems to be wary of condemning any action to having negative moral value, but I’m not, so I’m going to say “Any policy which gives you really obviously poor expected reward in both of these cases is immoral.”
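(As a rough illustration of the difference, here is a toy sketch in Python. The public-goods game, its payoffs and the “acts normally” stand-in are all invented for the example; the only thing being demonstrated is who else is assumed to be following the policy under evaluation.)

    import random

    def expected_reward(my_policy, other_policy, rounds=10000):
        """Monte-Carlo estimate of my average reward in a toy public-goods game."""
        total = 0.0
        for _ in range(rounds):
            mine = my_policy()
            others = [other_policy() for _ in range(9)]
            pot = (mine + sum(others)) * 1.5   # contributions grow and are shared out
            total += pot / 10 - mine           # my share of the pot, minus my contribution
        return total / rounds

    free_ride = lambda: 0                          # contribute nothing
    cooperate = lambda: 1                          # contribute one unit
    acts_normally = lambda: random.choice([0, 1])  # stand-in for "everyone else acts normally"

    # (a) expected reward if only I follow the policy and everyone else acts normally
    print(expected_reward(free_ride, acts_normally), expected_reward(cooperate, acts_normally))
    # (b) expected reward if everyone follows the policy
    print(expected_reward(free_ride, free_ride), expected_reward(cooperate, cooperate))

With these made-up numbers, free-riding wins under (a) and cooperating wins under (b), which is roughly the gap between “morally neutral” and having “moral value” above.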

Section 2 “Transition from Popular Moral Philosophy to a Metaphysics of Morals”

“If it is useful, it can’t be said to have moral worth.”

There are a few examples of moral decision problems, which are evaluated under the categorical imperative. The charity example is the most interesting, as I suspect that you could probably make some extra assumptions so that giving to the poor becomes immoral under my formulation.

Also, it seems to be suggested that “I am not just a means to an end; I am moral so I am an end in myself” + categorical imperative => “I must not treat him as if he is just a means to an end” which I don’t think follows. Luckily, it’s possible to read the rest of the book without agreeing with this conclusion.

He then outlines the concept of a “kingdom of ends”, as in a community of ‘moral’ citizens (ones who follow the Categorical Imperative). The idea is that there is no need to have externally enforced laws, as each citizen legislates for himself by applying the categorical imperative to all of his decisions.
I think it’s an interesting thought experiment, and if anyone fancies running a simulation comparing the evolution of a kingdom of ends against a kingdom of nature (morally neutral, under my formulation) give me a shout.
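(For anyone who does fancy it, here is the kind of skeleton I have in mind, as a sketch only: the ten-agent public-goods game, the best-response dynamics for the kingdom of nature and the “could I will everyone to do this?” rule for the kingdom of ends are all invented assumptions, not anything from Kant.)

    def payoff(my_move, others_moves):
        """Toy public-goods payoff: a shared benefit from cooperation, minus my own cost."""
        cooperators = others_moves.count("C") + (1 if my_move == "C" else 0)
        return 1.5 * cooperators / (len(others_moves) + 1) - (1.0 if my_move == "C" else 0.0)

    def kingdom_of_nature_step(moves):
        """Each agent best-responds to what everyone else did last round (morally neutral)."""
        return ["C" if payoff("C", moves[:i] + moves[i+1:]) > payoff("D", moves[:i] + moves[i+1:]) else "D"
                for i in range(len(moves))]

    def kingdom_of_ends_step(moves):
        """Each agent asks which move it could will as the universal policy, and plays that."""
        n = len(moves)
        universal_c = payoff("C", ["C"] * (n - 1))
        universal_d = payoff("D", ["D"] * (n - 1))
        return ["C" if universal_c > universal_d else "D"] * n

    start = ["D"] * 10
    for step in (kingdom_of_nature_step, kingdom_of_ends_step):
        state = start
        for _ in range(20):
            state = step(state)
        total = sum(payoff(state[i], state[:i] + state[i+1:]) for i in range(len(state)))
        print(step.__name__, state, round(total, 2))

With these invented payoffs the kingdom of nature settles on everyone defecting and the kingdom of ends on everyone cooperating; the interesting experiments would be mixed populations and noisier payoffs.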

He says that in the kingdom of ends, everyone acts as a “supreme legislator” (I agree) but he then says that they can’t be motivated by self-interest (I don’t agree: I think that being moral under *my* formulation provides a great simplifying assumption, so a self-interested party without infinite time for logical reasoning might expect greater rewards more quickly by acting morally)

He then goes on to introduce a concept of “relative values” (“market price” for skills etc, and “affective price” for humour etc) and says that they are completely different from an “intrinsic worth” for morality.
My objection to this is that under his formulation, for a maxim to have moral worth, “it should be desired that it become the universal maxim”. I can only assume that it must be “desired” for a reason, namely that it would increase the availability of the aforementioned relative values. Therefore, “intrinsic” moral worth is still surely dependent on these relative values. I think you can either conclude this, or conclude that someone could desire the end of the world, and therefore be completely moral for going on a murderous rampage.

Somewhere towards the end of the second section, Kant starts punching holes in his own concepts, as I’d been waiting for him to do for half the book already. He mentions that there’s no way to construct a true kingdom of ends, as there is no incentive for people to behave morally.

He also refers to his universal imperative as “synthetic”, which is philosopher for “I made this shit up. Maybe I’ll justify it some other time”

There is at some point a comparison between the categorical imperative and “do as you would be done by”. It’s essentially a less strict condition for morality, which removes the convicted man’s cry of “You wouldn’t want to be sent to prison. This is immoral.” I quite like that. Shame it’s a bit too woolly, because he’s trying to apply it to every possible situation. It’s also a shame that he adds loads of questionable assumptions without justification in order to get the formulations of “Treat others as ends in themselves” and “Rational beings must legislate universal law”.

Section 3: Transition from Metaphysics of Morals to Critique of Practical Reason.

I don’t have so many notes in this section as I read it a bit more quickly. Quite a lot of it is just more picking apart and limitations of the Categorical Imperative, which I had already concluded was pretty useless for encouraging moral behaviour in *anyone* in Kant’s form, but might provide the groundwork for a nice simplifying assumption at least.

Also included was “On a right to lie because of Philanthropic concerns”, which I have just realised I haven’t read. Maybe I’ll edit this some other time to add my views on that.


14 Comments »

  1. Disclaimer: this was written far too early in the morning, and I haven’t read Groundwork in quite a while (at least a year, I suspect)

    First up, I think it’s useful to have a summary of all three formulations of the Categorical Imperative (CI):
    1) A maxim you create for yourself should be universalizable, so that it could logically apply to all
    2) Treat each person as an end, not a means
    3) Each person should act as if they were in the Kingdom of Ends (KE)

    “Any action not explained by anything like self-interest will be explained by ‘duty’ and therefore have ‘moral value’.”
    If you have a binary between self-interest and duty, this seems to leave no room for kindness, or acting purely for the good of others. Kant seems to think most people would be incapable of acting out of pure goodness, because they want to be good, or want to be seen as good.

    “You should always be able to desire that your policy should become the policy used by all.”
    At least from a lawyer’s perspective, maxims and policies are not the same thing. A maxim is a general principle to be applied to rules, whereas policy is something like soft law – it should be followed, but it is lower in a hierarchy than rules, and reasons can be given for not following it.

    “So when evaluating policy, I would say at this point that there are two approaches:
    a) Pick the policy which gives you the highest expected reward if you follow it and everyone else acts normally (This will be morally neutral according to Kant).
    b) Pick the policy which gives you the highest expected reward if everyone follows it (This will have ‘moral value’ according to Kant).”
    These are essentially rule-utilitarianism – deciding which act to do takes too much time, and the outcomes are too unpredictable, so you come up with a basic policy instead.
    The problem here is how to know whether anyone else will follow the same policy.

    “If it is useful, it can’t be said to have moral worth.”
    This links back to my first comment – if you deem something to be useful, you’ve considered whether it’s good for you, so it no longer has moral worth.

    “He then outlines the concept of a “kingdom of ends”, as in a community of ‘moral’ citizens (ones who follow the Categorical Imperative). The idea is that there is no need to have externally enforced laws, as each citizen legislates for himself by applying the categorical imperative to all of his decisions.
    I think it’s an interesting thought experiment, and if anyone fancies running a simulation comparing the evolution of a kingdom of ends against a kingdom of nature (morally neutral, under my formulation) give me a shout.”
    It’s interesting that you see the kingdom of nature as morally neutral. A few different philosophers have written about the State of Nature (mostly those who subscribe to social contract theory), and have different views on it. Hobbes thought it was morally neutral, and generally unpleasant, but Locke was a little kinder.

    “Quite a lot of it is just more picking apart and limitations of the Categorical Imperative, which I had already concluded was pretty useless for encouraging moral behaviour in *anyone* in Kant’s form, but might provide the groundwork for a nice simplifying assumption at least.”
    Under Kant’s regime, it’s pretty much impossible for anyone to act entirely morally. The basic premise is that if you consider what you are doing to be good/moral, that’s self interest. The CI is about working towards the Kingdom of Ends, not actually meeting it.

    Once I’ve read it again, I’ll probably write something myself.

    Comment by yoyomarules — April 4, 2009 @ 9:21 am

  2. > I haven’t read Groundwork in quite a while
    Yeah, so when you say that, I’m reminded of German techno music [/childish]

    “””First up, I think it’s useful to have a summary of all three formulations of the Categorical Imperative (CI):”””
    Yeah, useful. The reason I didn’t include it is because I don’t think that the other two *really* follow from the first. Where I talk of the categorical imperative, I mean the first formulation.

    “””Kant seems to think most people would be incapable of acting out of pure goodness, because they want to be good, or want to be seen as good”””
    He actually gives an example of someone who finds “inner pleasure in spreading joy around them”[A.K 398 or page 11] but he attributes the actions of this person to their inclinations. He’s trying to create a formulation of morals (a priori) based upon reason alone, so it would be inappropriate to consider the morality of actions based on inclinations.

    “””At least from a lawyer’s perspective, maxims and policies are not the same thing. “””
    In reinforcement learning, a policy is simply a way of getting from the state of the world to an action. For example “You are playing rock-paper-scissors, therefore pick rock with probability 1/3, paper with probability 1/3 and scissors with probability 1/3” I’m not sure whether this could also be called a maxim, so I use the word policy. The set of maxims appears to be a subset of the set of policies, as defined in the reinforcement learning literature.
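    To make that concrete (a sketch only, not tied to any particular RL library), the rock-paper-scissors policy really is nothing more than a function from the state of the world to an action:

        import random

        def rps_policy(state):
            """A policy: map the state of the world to an action, here by sampling
            uniformly over rock/paper/scissors."""
            if state == "playing rock-paper-scissors":
                return random.choice(["rock", "paper", "scissors"])
            raise ValueError("this policy defines no action for state %r" % state)

        print(rps_policy("playing rock-paper-scissors"))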

    “So when evaluating policy, I would say at this point that there are two approaches:
    a) Pick the policy which gives you the highest expected reward if you follow it and everyone else acts normally (This will be morally neutral according to Kant).
    b) Pick the policy which gives you the highest expected reward if everyone follows it (This will have ‘moral value’ according to Kant).”

    “””These are essentially rule-utilitarianism”””
    To me, utilitarianism implies a constraint on the reward function which is not present in optimal control/decision theory (namely that you care about other people). The first is a formulation of optimal control, the second is Kant’s first formulation of the categorical imperative applied to optimal control. There is no requirement to think about the well-being of others.

    “””The problem here is how to know whether anyone else will follow the same policy.”””
    There is also the problem (in the first case) that you don’t know what everyone else’s policy will be, so it would need to be inferred somehow. By assuming that everyone else follows your policy, you make a huge simplifying approximation. Whether it is valid or not remains to be seen.

    “””It’s interesting that you see the kingdom of nature as morally neutral.”””
    For the purposes of the exercise, I use Kant’s moral compass. It seems that Kant’s moral compass is incapable of pointing south though, only north or neutral.

    “””Under Kant’s regime, it’s pretty much impossible for anyone to act entirely morally. The basic premise is that if you consider what you are doing to be good/moral, that’s self interest.”””
    Yes, but he doesn’t condemn self-interest, he merely says that reason doesn’t assign it any intrinsic/a priori/moral worth. I think you have to look into relative worth to decide what the optimal policy is once morality is ruled out.

    Comment by alsuren — April 4, 2009 @ 3:37 pm

  3. (Sorry it’s taken me so long to get around to this response. Inevitably, it is now going to end up being absurdly lengthy and full of much wandering.)

    My first impression is that it seems like interesting stuff, though I can’t deny a lot of it strikes me as nonsense. (This is possibly because I haven’t read anything by Kant or any true philosophy for that matter, though possibly just because it *does* contain some amount of nonsense.) Indeed, I’ve gathered all of my knowledge about Kant’s ideas from your posts and the previous couple of comments, but that shouldn’t matter terribly much for the sake of my post. Here’s my take on things anyway…

    “A will has ‘moral worth’ due to its motivations, rather than its actions.” – I don’t think I even agree with this first statement. It would seem to imply that the consequences of desires are of no regard, that it fails to account for a will to do what the majority of society consider harmful, and that the only requirement is that the individual believes these intentions to be moral and is convinced of their own rightness. Could such a definition not apply to evil men that are self-deluded? Essentially, I am trying to say that this is a local definition, rather than a global one, which I consider to be necessary, given that neither motivations nor actions can be considered in isolation.

    “There is a distinction between instinct and reason.” – I can’t properly comment on this without understanding what it’s trying to get at, and I very much don’t. If I were to guess, Kant is simply differentiating between more primitive motivations or incentives for actions and fully cognitive (analytical?) thinking.

    “Any action not explained by anything like self-interest will be explained by ‘duty’ and therefore have ‘moral value’.” – I’m pretty much with Holly on this one. Also, it appears to verge on a circular argument. The question now is: how would you define “duty”? To me, duty is itself defined by ingrained moral values, though I’m not going to launch into a deep discussion about that here.

    The Categorical Imperative – I’m marginally hesitant to criticise an idea such as this when it seems to be so esteemed; nonetheless I think I still understand its essence, so I’m going to do it anyway. I would object principally on a similar basis to that of the “moral worth” of will. In other words, not having an awareness of other people’s desires/opinions. In my mind, maxims should strictly be only part of the decision-making process when considering the morality of actions. Your utilitarian view is quite interesting however, and I too would not be very hesitant in labelling motives with poor expected reward in either of the two cases as immoral. Change “gives you the highest expected reward” to “gives you and the community the highest expected reward” and you’ve got a full agreement. (See a later paragraph for my explanation of this.)

    “If it is useful, it can’t be said to have moral worth.” – Too vague for me to formulate any useful opinion. Again, possibly due to the fact that you are summarising (probably quite *well*, but Kant clearly took much longer to explain, and I tend to need concrete definitions/examples in many cases).

    Regarding the “kingdom of ends”, I think I agree with his reference to it being “synthetic” more than anything. Above all, it sounds like something for which I would need to read his work to fully grasp, so I won’t read too much into it for now. There are quite likely some very sensible (and dubious) ideas buried in this concept of a “kingdom of ends” (at least given the assumption of the Categorical Imperative), but I would strongly contend the point that “there is no incentive for people to behave morally”. Rather, I would propose that by the very nature of morals, people have incentives to behave morally – largely inherent and purposeful ones – but when you throw in competing urges and a degree of self-interest, they often tend to be overruled (depending on the situation).

    That’s enough of debating Kant’s views. From here onwards it’s essentially just my own opinions on the matter. Personally, I would start by taking a broader look at the reasons for moral behaviour and how moral societies operate. I would in fact argue that you can only properly consider things from the standpoint of evolution and natural processes. This would of course involve theories that Kant clearly did not have available in the 18th century, though I’m not sure how much it would have limited him.

    Firstly, I would propose that by natural selection, the genes for individuals who are not inclined towards self-interest tend to be removed from the gene pool. This is an obvious statement, since other individuals with greater self-interest (at least to a degree) would easily outcompete the others. Conversely, those genes promoting *too much* self-interest would tend to be removed too. Indeed, those persons who consistently put their personal interests before that of the society are most often recognised as detrimental to the success of the community among other members. Likewise, those genes that cause behaviour which is typically “bad” for the community are similarly eliminated over time, whereas the inclination to benefit society is maintained by natural selection. Putting everything together, there would appear to be a requirement that one’s policies are governed by both self and communal interest. A system in which the interest of others is considered is self-reinforcing and therefore tends to be more successful. This is perhaps the clearest reason as to why morals exist. Natural variation among individuals should therefore explain a natural variation in moral values – i.e. the balance is tipped one way or the other. Of course, the reality is a lot more complex, and not one-dimensional as this suggests – if you augment this theory with the fact that perceptions of utility (to oneself and the community) can vary hugely, I think it starts to explain a lot.

    The ethic of reciprocity (“do unto others as you would have them do unto you”) is another interesting point. Since I’ve actually read a few things relating to this, I think it’s fair to say that there is a significant amount of evidence supporting the usefulness (from a utilitarian perspective) of the moral. People have attempted to explain the ethic using game theory and psychology among other arguments, which actually seem to work quite well. There’s no point in me going much further into this, as I’m realising as I write this that many of my views have been studied within the field of “evolutionary psychology”. The only remaining thing to say is that as intelligent beings, evolutionary psychology can’t explain *entirely* our moral (or immoral) motivations and behaviour – there’s some degree of competition between instinct and high-level reasoning, which comes back to my earlier point.

    Right, that’s the end of my (likely very patchy) theories on the subject. Perhaps much of my disagreement comes from analysing Kant’s philosophies purely from your summaries and therefore missing some of the subtleties – not that it would change any of my actual opinions. Unless I’m grossly misinterpreting his ideas, I think my main criticism would still be that his tendency to consider morals and behaviour from a local standpoint (as if individuals were fully independent and free from the effects of society on themselves) is a fault.

    So this has pretty much turned into an essay, which I was really hoping it wouldn’t. Let me know what you think anyway.

    Note: This was largely written late at night, so excuse any typos (I can’t be bothered to check for them now), as well as any slightly incomprehensible sentences.

    Comment by Alex — April 9, 2009 @ 1:21 pm

  4. When reading your comments, and putting them into the context of Kant’s ideas on moral philosophy, I think it’s important to keep in mind that Kant tries to use logic and reason rather than anything empirical or scientific to build his grounding. It’s an exercise in “How few assumptions/restrictions can we make and still have something that’s workable?”

    One thing that I maybe should have commented on is the idea of prudence, which I would put under expected reward, but is all about long term payback (Holly will probably correct me on my definition of this too). A lot of the “being seen to be trustworthy” stuff is just about being prudent. Essentially still self-interested behaviour, but with a long return on investment period.

    In the evolutionary sense, your reward function would also include the continuation of your genes, or genes similar to yours, so helping the community is still self-interest.

    When I remove the explicit requirement to think about the well-being of your community, I do it because I don’t think you *need* it: if you’re just self-interested then helping your neighbour might emerge from prudence or genetics (but not necessarily). If you follow [my formulation of] the categorical imperative, then helping your neighbour should emerge from the “do as you wish others to do” element of it.

    What I’m trying to say is “Yes: everything you say is valid, but you don’t *need* all of that for a system of morals, and for a suitably general system of morals, you *can’t*.”

    I seem to remember reading something in the bittorrent protocol documentation that essentially says “Do what you want, but make sure that your policy works both in the status quo and if your policy became the status quo.” (but I can’t find it now). This is an example of a community of (non-human) rational beings mostly obeying the categorical imperative (exception being super-seed mode, which isn’t very common) and surviving pretty well.

    Comment by alsuren — April 10, 2009 @ 12:44 am

  5. “To me, utilitarianism implies a constraint on the reward function which is not present in optimal control/decision theory (namely that you care about other people). The first is a formulation of optimal control, the second is Kant’s first formulation of the categorical imperative applied to optimal control. There is no requirement to think about the well-being of others.”

    Utilitarianism (normally) means the consideration of what is good for the population as a whole, so the well-being of others is automatically included. However, as (presumably) you are also a member of the population, there is a degree of self-interest. Unless you take ‘well-being’ as something else, such as including individual rights trumping the good of the population.

    “Could such a definition not apply to evil men that are self-deluded? Essentially, I am trying to say that this is a local definition, rather than a global one, which I consider to be necessary, given that neither motivations nor actions can be considered in isolation.”
    For Kant, reason is generally objective; if an individual is self-deluded, they wouldn’t be capable of what he sees as reason.

    ““There is a distinction between instinct and reason.” – I can’t properly comment on this without understanding what it’s trying to get at, and I very much don’t. If I were to guess, Kant is simply differentiating between more primitive motivations or incentives for actions and fully cognitive (analytical?) thinking.”
    That’s essentially it. In a nutshell, Kant thought that everyone had their first instincts or desires, but should use reason and logic to ignore these and do what was best. Earlier philosophers (such as Hobbes) had thought freedom meant the ability to act on these first instincts, whereas Kant spoke of a ‘higher freedom’ in being able to resist these instincts and act on reason instead.

    “Change “gives you the highest expected reward” to “gives you and the community the highest expected reward” and you’ve got a full agreement. (See a later paragraph for my explanation of this.)”
    Yeah, this is essentially utilitarianism in the pure, Millsian form.

    “Putting everything together, there would appear to be a requirement that one’s policies are governed by both self and communal interest.”
    So evolution is essentially looking for a happy medium between looking out for yourself and selflessness (presumably not pure selflessness, as that could lead to suicide)?

    “A system in which the interest of others is considered is self-reinforcing and therefore tends to be more successful.”
    I’ve been reading more J.S. Mill today, and he had a theory that just trying to preserve the community, or at least do what the community wants, is how it stagnates. A community tends to gravitate towards its middle ground; it takes a few eccentrics/geniuses to add in a spark of creativity and get society thinking again about what is actually good for it.

    “The ethic of reciprocity (”do unto others as you would have them do unto you”) is another interesting point. Since I’ve actually read a few things relating to this, I think it’s fair to say that there is a significant amount of evidence supporting the usefulness (from a utilitarian perspective) of the moral.”
    ‘Do unto others’ is essentially the Categorical Imperative (in all its formulations) in a nutshell. Most of the discussion of Kant is based around the concepts of ‘higher freedom’ and the subtle differences between the three formulations (like David said, they don’t mean exactly the same thing)

    “When I remove the explicit requirement to think about the well-being of your community, I do it because I don’t think you *need* it: if you’re just self-interested then helping your neighbour might emerge from prudence or genetics (but not necessarily). If you follow [my formulation of] the categorical imperative, then helping your neighbour should emerge from the “do as you wish others to do” element of it.”
    Helping your neighbour may also come out of self-interest. Durkheim and Marx pointed out that, since in a modern society labour is divided between many people, you need your neighbours to survive.

    Basically, I think you both need to read J.S. Mill (‘On Liberty’ and ‘Utilitarianism’), but they’re probably ones for after exams.

    Comment by yoyomarules — April 10, 2009 @ 11:00 pm

  6. @alsuren:

    ‘When reading your comments, and putting them into the context of Kant’s ideas on moral philosophy, I think it’s important to keep in mind that Kant tries to use logic and reason rather than anything empirical or scientific to build his grounding. It’s an exercise in “How few assumptions/restrictions can we make and still have something that’s workable?”’

    This was what I was suspecting you might say. Still, what I consider my approach to be is more of a theoretical scientific one. I don’t think you actually need experimental science to support what I’m proposing any more than what Kant is. Purely using “logic and reason” to explain morals is something about which I am slightly doubtful. Are not scientific “thought experiments” just as good? Trying to purely use logic (at least how I interpret you to mean that) would seem to be a doomed attempt (humans are far from wholly rational beings, for a start, and additionally, how would you define perfect reasoning?). Perhaps he is trying to analyse everything from a perspective of perfect logic, which I would accept rather more easily. Nonetheless, from what I’ve read of his ideas there’s no flow of mathematical logic to support them. Also, I think I see what you mean regarding a suitably general system of morals, but would counter that by saying I don’t believe such a thing is possible – in fact, I would tend to classify certain morals as more general than others, while some are very specific to the human race and environment and may not apply to other intelligent species for example. Saying all this, I suppose I’m still missing the point here from your point of view?

    ‘When I remove the explicit requirement to think about the well-being of your community, I do it because I don’t think you *need* it: if you’re just self-interested then helping your neighbour might emerge from prudence or genetics (but not necessarily). If you follow [my formulation of] the categorical imperative, then helping your neighbour should emerge from the “do as you wish others to do” element of it.’

    I considered this too but I didn’t like it very much. Certainly, there’s an aspect of “If I help others, they’ll help me in return.” Take for example the case where a person of average socioeconomic standing gives to charity (perhaps anonymously). Can you truthfully find any way in which this particular person is rewarded? What is more, such situations are far too common to dismiss as odd exceptions within humanity (such as saints). The less obvious case is of course that of billionaires pursuing philanthropy – one could argue that it was done entirely for the purpose of attention or gaining a more positive legacy after death. Nonetheless, I maintain that there is a strong element of wanting to help the community pervading morals (even ignoring reciprocation).

    The example of BitTorrent you give seems interesting though a bit dubious. Do people truly follow this maxim(?) in general? I *am* becoming slightly more inclined towards the usefulness of the categorical imperative, although it still doesn’t come close to explaining the basis of morals in my opinion, at least not without other rules.

    @yoyomarules:

    ‘“Could such a definition not apply to evil men that are self-deluded? Essentially, I am trying to say that this is a local definition, rather than a global one, which I consider to be necessary, given that neither motivations nor actions can be considered in isolation.”
    For Kant, reason is generally objective; if an individual is self-deluded, they wouldn’t be capable of what he sees as reason.’

    That’s a fair point (at least to a certain level). If Kant wants to reason solely from the perspective of having a society of perfectly logical beings (I’ve already stated my dissatisfaction with this), then I can’t really object.

    ‘“Change “gives you the highest expected reward” to “gives you and the community the highest expected reward” and you’ve got a full agreement. (See a later paragraph for my explanation of this.)”
    Yeah, this is essentially utilitarianism in the pure, Millsian form.’

    Quite interesting that I seem to be playing the role of utilitarian here. Not sure if you read David’s blog post about machine consciousness (I wrote my own too, given it was based on an agreement to sketch out a debate we had some months ago), but I was tending to reject the principles of utilitarianism there while he was the ardent proponent.

    ‘“Putting everything together, there would appear to be a requirement that one’s policies are governed by both self and communal interest.”
    So evolution is essentially looking for a happy medium between looking out for yourself and selflessness (presumably not pure selflessness, as that could lead to suicide)?’

    Exactly. This would also imply that morals (to a varying degree?) aren’t static over time, which kind of relates to the point I made earlier.

    ‘“The ethic of reciprocity (“do unto others as you would have them do unto you”) is another interesting point. Since I’ve actually read a few things relating to this, I think it’s fair to say that there is a significant amount of evidence supporting the usefulness (from a utilitarian perspective) of the moral.”
    ‘Do unto others’ is essentially the Categorical Imperative (in all its formulations) in a nutshell. Most of the discussion of Kant is based around the concepts of ‘higher freedom’ and the subtle differences between the three formulations (like David said, they don’t mean exactly the same thing).’

    The ethic of reciprocity is equivalent to the categorical imperative? Give me some time to think about that a bit more. 😛

    ‘“When I remove the explicit requirement to think about the well-being of your community, I do it because I don’t think you *need* it: if you’re just self-interested then helping your neighbour might emerge from prudence or genetics (but not necessarily). If you follow [my formulation of] the categorical imperative, then helping your neighbour should emerge from the “do as you wish others to do” element of it.”
    Helping your neighbour may also come out of self-interest. Durkheim and Marx pointed out that, since in a modern society labour is divided between many people, you need your neighbours to survive.’

    Agreed, but also see the point made in the first half of this comment.

    If you would recommend that book for someone almost totally unfamiliar with philosophy and its history, then I’d be happy to read it. Indeed, probably best to wait until after exams (my list of books to read is currently rather substantial anyway).

    Comment by Alex — April 11, 2009 @ 5:53 pm

  7. A conclusion that seems most agreeable is that morals are an approximation of optimality. Alex will post something on this topic shortly.

    Comment by alsuren — April 11, 2009 @ 11:49 pm

  8. I’m beginning to lose track of all the threads going on here (not helped by the fact I’m reading all of this on a 7″ EeePC), but the one point I wanted to make was the difference between your ideas of utilitarianism.

    Utilitarianism is normally seen as doing what’s best for the population as a whole. As I said before, the pure version is act-utilitarianism, where each individual calculates, for each situation, which response would be best for the population. Rule-utilitarianism is based on the fact that these calculations are too complicated to do for each act, and there are too many unknowns. The other reasoning given is that by setting rules that everyone is (expected) to follow, the number of unknowns decreases.

    David’s version of utilitarianism seems to be (although I could be simplifying it just so I can merge the different theories) that each person should do what is best (in terms of what makes them happiest) for themselves. This can also be seen as utilitarianism, if you take the presumption that each person does this. If each person in a population does what makes them happy, the population as a whole would be happier. Before I give my opinion on this (which you can probably guess), is this basically right?

    Mill, despite being a historical text, isn’t too hard to read, as it was designed as a pamphlet for public consumption. Kant is far harder to read, as Kant was writing for philosophers (and didn’t really have an idea of working with anyone else – he was a boring so-and-so).

    Comment by yoyomarules — April 12, 2009 @ 9:41 pm

  9. I really think this discussion should be paused until we have Alex’s post, but to answer your questions:

    “David’s version of utilitarianism seems to be (although I could be simplifying it just so I can merge the different theories[1]) that each person should do what is best (in terms of what makes them happiest[2]) for themselves…. is this basically right?”

    [1] The whole point of the Bayes’ Risk/reinforcement learning formulation is that it is completely general, and can be morphed into a statement of the categorical imperative by saying “assume everyone behaves like me”, or into utilitarianism by considering others in your cost function.

    [2] What is happiness to an arbitrary rational being? Let’s use utility/cost instead, as it can be defined to depend on the happiness/pain of others if we want to be utilitarians.

    So we’re reducing it to:
    “Each person should do what they expect is best for themselves, given their perception of the world.”
    This is a necessary condition for optimality, as far as I can see it, and also (I would hope) how everyone behaves anyway.

    Add “assuming that everyone else will behave the same way as I do.” and you are compatible with the categorical imperative.

    Add “and society” and you have the commonly accepted view of utilitarianism.
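    If it helps, that reduction can be written as one argmax with two optional assumptions bolted on. The two-action “sharing game” below is invented purely so the snippet runs; it isn’t meant as a serious model of anything:

        ACTIONS = ["keep", "share"]

        def utilities(my_action, others_action):
            """(my utility, everyone else's utility) in a made-up sharing game."""
            mine = (2 if my_action == "keep" else 1) + (2 if others_action == "share" else 0)
            others = 3 if my_action == "share" else 0
            return mine, others

        def choose(assume_everyone_copies_me=False, include_society=False, others_usually="keep"):
            def score(action):
                others_action = action if assume_everyone_copies_me else others_usually
                mine, others = utilities(action, others_action)
                return mine + (others if include_society else 0)
            return max(ACTIONS, key=score)

        print(choose())                                # best for me, given my perception of the world
        print(choose(assume_everyone_copies_me=True))  # add "everyone behaves like me": categorical imperative
        print(choose(include_society=True))            # add "and society": utilitarianism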

    It’s interesting that you say “by setting rules that everyone is (expected) to follow, the number of unknowns decreases.” regarding rule utilitarianism, because I would say that’s the categorical imperative simplifying assumption.

    Regarding the distinction between rule and act utilitarianism: I would bring up the reinforcement learning definition of a policy, which is “policy : state of the world -> action” or “A policy is a mapping from the state of the world to an action.”
    If the state of the world contains every observable thing, then you have act utilitarianism, but if you start to generalise, and pre-compute what your optimal actions are then you have rule utilitarianism.
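    As a toy illustration of that (again, the states, actions and utilities are invented, and neither function is meant as a serious moral calculator):

        ACTIONS = ["help", "ignore"]

        def utility(state, action):
            """Stand-in for the expensive, unknown-filled calculation that
            act utilitarianism asks for in every single situation."""
            benefit_to_others, cost_to_me = state
            return (benefit_to_others - cost_to_me) if action == "help" else 0

        def act_utilitarian_policy(state):
            # recompute the best action from the fully observed state, every time
            return max(ACTIONS, key=lambda a: utility(state, a))

        # rule utilitarianism: generalise states into a few categories and
        # pre-compute the action for each category once
        RULES = {"benefit outweighs cost": "help", "cost outweighs benefit": "ignore"}

        def rule_utilitarian_policy(state):
            benefit_to_others, cost_to_me = state
            category = ("benefit outweighs cost" if benefit_to_others > cost_to_me
                        else "cost outweighs benefit")
            return RULES[category]

        print(act_utilitarian_policy((5, 2)), rule_utilitarian_policy((5, 2)))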

    Now I’m interested in your opinion on this (as a minimum requirement for optimality)

    Comment by alsuren — April 13, 2009 @ 11:23 am

  10. Yeah, best to pause the discussion for now. Otherwise, you’re probably going to end up restating your views in response to my post.

    As a note, I’m probably going to completely leave the categorical imperative out of it, since without the addition of society, I do consider it mainly useless. It’s just going to include views on utilitarianism and optimality of morals (i.e. most of what we talked about). Feel free to add anything else in comments though, of course.

    Comment by Alex — April 13, 2009 @ 10:07 pm

  11. Cool, looking forward to it! I want to write something about it, but I suspect I’ll leave it until I get a chance to reread Groundwork. Most of my ideas right now are coming from Mill instead, which may confuse things.

    Comment by yoyomarules — April 14, 2009 @ 10:08 am

  12. […] Philosophy, spinoza’s god, theory, utilitarianism | This post essentially follows on from the Notes on Kant post by David, which having prompted rather a lot of comments and one or two conversations, led to […]

    Pingback by The Optimality of Morals « Alex’s Blog — May 3, 2009 @ 11:30 pm

  13. […] post essentially follows on from the Notes on Kant post by David, which having prompted rather a lot of comments and one or two conversations, led to […]

    Pingback by Noldorin's Blog » Blog Archive » The Optimality of Morals - Musings on my countless projects, software, science, and various other curiosities — August 20, 2009 @ 8:08 pm

