September 23, 2009

Food for food.

Filed under: facebook — alsuren @ 12:48 am

I have been told that I should blog more often, and my software based blog posts will no longer be being shared on facebook, so something a little lighter might be more appropriate.

Those of you who know me well should be aware that I do like my food. Doesn’t always have to be expensive or the height of fashionable cuisine, as long as it’s interesting/satisfying.

A few meals that stand out recently:

Cream tea at Bea’s with Holly and her friend from home. If you’re looking for excess, then this is the place to go. They got pots of tea (plus extra hot water on request) a cupcake each, and a 3-layer platter of cakes between them for £8/person. I filched quite a lot, but most of it went in a doggy-bag.

I’ve been talking about Pizzaria Bel-Sit for a while now, but I’d not been there in years. It’s pretty much the best place to go for birthday parties. Back when I was little, I went there on my birthday and they gave me one of the staff t-shirts that was too small for any member of staff they were planning on employing any time soon. Holly’s birthday presented the perfect opportunity to go there again. We had crispy garlic bread, which is wicked-cool (contrary to the comment of “Don’t order the garlic bread, it’s really nasty.” from the next table). There was a little girl in the queue in front of us whose parents had brought her out for a birthday meal. Holly almost advertised that it was her birthday too, but the girl seemed a bit timid, so we hid the fact. Dessert took the form of baked alaska. I’d assumed that this would involve vanilla ice cream wrapped in something baked. How wrong I was. You don’t need to be honked at and sung happy birthday to in order to be impressed by the desserts at this place: a homemade mixture of rich ice cream flavours, sat on a slice of sponge and topped with flamed meringue.

At Edinburgh Lindy Exchange, notable food included Chocolate Soup, The Mosque Kitchen, and assorted cake provided by the locals. The Mosque Kitchen does simple curries and rice, served in paper bowls and eaten with plastic spoons while sitting at plastic garden tables. We need more places like this: good food that speaks for itself. Chocolate Soup does hot chocolate made with melted chocolate and semi-skimmed milk to make it taste richly of chocolate (contrast with the powder and cream approach of many places, which makes it just taste like fat). They also happen to make pretty nice soup (far nicer than that provided by EAT. on Monday in cambs.)


September 11, 2009

Telepathic Ramblings

Filed under: collabora — alsuren @ 11:09 am

Okay, so I should probably inform people of what I’ve been working on recently.

A few of you might remember my cupsandstring Telepathy (IM) client from a few years back. Well I’ve recently revisited that, because someone was asking on IRC, so if you want to read the source, it won’t make you want to puke quite so much these days.

I made a skeleton Skype connection manager way back when too. (Don’t get excited: it’s based on the public Skype API provided by their UI, and only connects to your currently configured account). It never really got off the ground though. A few weeks back (go check the bzr log if you care) I picked it up again. I got it to the point where I could use it for text chat, and have been using it to replace the ugly bits of the Skype UI. It’s currently made of gaff and verbose, and has to be run in a konsole because I make it drop into a debugger whenever anything unexpected happens. This will continue to be the case until I have a decent testing framework for it.

Guess what my next project is: A testing framework! The spec has been discussed on the Telepathy mailing list. I’m trying to make it as generic as possible, so that I can try it on a well-tested connection manager (gabble) and then move on to butterfly, and then spyke when I’m feeling brave. Since I don’t want it to depend on the protocol too much, I’ve designed it around the idea of an echo bot (which can run remotely or locally) and a set of test scripts, which mostly just poke the echo service (like Skype’s echo123), and expect a sane reply. The bonus of this is that you can test interoperability between haze and gabble/butterfly/idle (and Kopete’s protocol code if it ever becomes telepathic) for free.
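Since the framework is meant to be protocol-agnostic, the core of such a test script needs nothing Telepathy-specific. Here's a rough sketch of the idea (all names below are invented for illustration; the actual spec is the one on the mailing list):

```python
def run_echo_test(send, receive, probes=("ping", "hello world")):
    """Poke an echo service and expect each message back unchanged.

    `send` and `receive` would be backed by a real connection manager
    (gabble, butterfly, ...); here they are plain callables, so the same
    test script can drive any protocol.
    """
    failures = []
    for message in probes:
        send(message)
        reply = receive()
        if reply != message:
            failures.append((message, reply))
    return failures

# A trivial local echo service, standing in for something like echo123:
outbox = []
failures = run_echo_test(outbox.append, outbox.pop)
```

With a real connection manager, send/receive would be wired up to a chat channel to the echo contact, which is what makes the interoperability testing come for free.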

Due to all of my noise in Telepathy-related communication channels, daf suggested that I apply for a job at Collabora. I did so, and officially started on Wed. That was a fun day. I arrived in the office, and daf and wjt greeted me with “Hey. We’ve thought what we want you to work on first: an echo service”. To which I could only respond “What? Like this one?”, flashing my eee at them. Turns out their priority is something that works with gabble, and handles media streams (initially just using gstreamer’s audiotestsrc but eventually actually echoing stuff, and then doing things like disabling codecs). Hopefully I will get something workable by the time daf moves to America, but I would like to write a few automated tests for the text functionality before I start adding lots of crazy features.

Also, I’ve had to re-wrap most of the functionality from python’s telepathy.client library, because it contains too much magic, and is impossible to inherit from. I’ve tried to do it in a way that could feasibly be auto-generated from the spec, so that’s a project for when I’m done with my echo service. Maybe I’ll even push it into telepathy.client. It’s great to be working on Open Source software again. 🙂

June 12, 2009

Filed under: Uncategorized — alsuren @ 11:16 pm

Anyone who has ever written a quick-and-dirty demo script in python will probably have found themselves doing something like this at the bottom of their file:

if __name__ == "__main__":
    import sys
    main(*sys.argv[1:])

(I think everyone has done this at least once, so don’t try to deny it.) This is fine, but kinda screws your user over if they type " --help".

What we really want is something that will give us --help functionality, and all of the useful stuff that optparse gives us, without having to jump through the many hoops that optparse makes us jump through. I think it would be enough if we were able to do this at the bottom of our file and get everything to work:

if __name__ == "__main__":
    import commandline
    commandline.run_as_main(main)

Well it just so happens you can. Just easy_install commandline (if you have setuptools) or go to and download it by hand.

So what run_as_main() does is use the inspect module to read the function’s docstring, and determine what arguments it takes. It then creates an optparse.OptionParser (using the information gathered from inspect). The OptionParser is then used to read sys.argv (this can be changed by passing an extra argument to run_as_main) and get a list of arguments to pass to the function. It then runs main with the right arguments by doing main(**kwargs).
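For the curious, the mechanism can be sketched in a few lines. This is my guess at a simplified modern equivalent, not the actual commandline source: every argument becomes a --option, positional arguments and type conversion are skipped.

```python
import inspect
import optparse

def run_as_main(func, argv=None):
    """Build an OptionParser from func's signature and call func(**kwargs).

    Simplified sketch: each parameter becomes a --option whose default
    comes from the function signature; the docstring becomes the help.
    """
    parser = optparse.OptionParser(description=func.__doc__)
    parameters = inspect.signature(func).parameters
    for name, param in parameters.items():
        default = None if param.default is param.empty else param.default
        parser.add_option("--" + name, default=default,
                          help="default=%r" % (default,))
    options, args = parser.parse_args(argv)  # positional args ignored here
    return func(**{name: getattr(options, name) for name in parameters})

def example_function(string1="nothing", string2="something", int1=1):
    """This is just an example."""
    return (string1, string2, int1)

result = run_as_main(example_function, ["--string1=foo"])
```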

So how do we use it? Well there is a simple example function included in, defined like this:

def example_function(string1, string2="something", int1=1):
    """This is just an example. You should really try writing your own
    commandline programs."""
    print string1, string2, int1

If we run it, and ask for help, we get:

$ python commandline/ --help
Usage: commandline/ string1 [string2 [int1]] [Options]

  -h, --help         show this help message and exit
  --string2=STRING2  default="something"
  --int1=INT1        default=1
  --string1=STRING1  This must be specified as an option or argument.

This is just an example. You should really try writing your own commandline
programs.

You can clearly see where everything from the function signature is going. I hope you agree that this is a pretty powerful time-saving device. If you have any comments/suggestions, give me a shout.

April 4, 2009

Notes on Kant

Filed under: Uncategorized — alsuren @ 12:55 am

These are a few notes from reading James W. Ellington’s translation of Kant’s “Grounding for the Metaphysics of Morals”. It’s mostly from stuff I wrote in my copy of the book, so if you want me to post page numbers/context for anything, shout. Most “quotes” are paraphrased either to represent my take on them, or for comic effect.

Intro (by Ellington): “This book is meant to be an introduction to Kant’s ideas. I will now proceed to run over them all in a summary as if you’ve already read his entire collected works.”… in an intro to an intro. Nice work.

Section 1: “From the Ordinary Knowledge of Morality to the Philosophical”

“A will has ‘moral worth’ due to its motivations, rather than its actions.” (obviously, it is difficult to analyse this kind of thing externally, so you could probably infer from this a “judge not”-like statement.)

“There is a distinction between instinct and reason.” (which I’m not sure I agree with).

“Any action not explained by anything like self-interest will be explained by ‘duty’ and therefore have ‘moral value’.” (all other actions seem to be considered neutral)

At the end of this section, I have a few notes on the Categorical Imperative.
“You should always be able to desire that your policy should become the policy used by all.” (Kant’s formulation refers to a ‘maxim’ guiding an action, whereas I prefer to think of ‘policy’ guiding actions/choices. )

So when evaluating policy, I would say at this point that there are two approaches:
a) Pick the policy which gives you the highest expected reward if you follow it and everyone else acts normally (This will be morally neutral according to Kant).
b) Pick the policy which gives you the highest expected reward if everyone follows it (This will have ‘moral value’ according to Kant).
Kant seems to be wary of condemning any action to having negative moral value, but I’m not, so I’m going to say “Any policy which gives you really obviously poor expected reward in both of these cases is immoral.”

Section 2 “Transition from Popular Moral Philosophy to a Metaphysics of Morals”

“If it is useful, it can’t be said to have moral worth.”

There are a few examples of moral decision problems, which are evaluated under the categorical imperative. The charity example is the most interesting, as I suspect that you could probably make some extra assumptions so that giving to the poor becomes immoral under my formulation.

Also, it seems to be suggested that “I am not just a means to an end; I am moral so I am an end in myself” + categorical imperative => “I must not treat him as if he is just a means to an end” which I don’t think follows. Luckily, it’s possible to read the rest of the book without agreeing with this conclusion.

He then outlines the concept of a “kingdom of ends”, as in a community of ‘moral’ citizens (ones who follow the Categorical Imperative). The idea is that there is no need to have externally enforced laws, as each citizen legislates for himself by applying the categorical imperative to all of his decisions.
I think it’s an interesting thought experiment, and if anyone fancies running a simulation comparing the evolution of a kingdom of ends against a kingdom of nature (morally neutral, under my formulation) give me a shout.

He says that in the kingdom of ends, everyone acts as a “supreme legislator” (I agree) but he then says that they can’t be motivated by self-interest (I don’t agree: I think that being moral under *my* formulation provides a great simplifying assumption, so a self-interested party without infinite time for logical reasoning might expect greater rewards more quickly by acting morally)

He then goes on to introduce a concept of “relative values” (“market price” for skills etc, and “affective price” for humour etc) and says that they are completely different from an “intrinsic worth” for morality.
My objection to this is that under his formulation, for a maxim to have moral worth, “it should be desired that it become the universal maxim”. I can only assume that it must be “desired” for a reason, namely that it would increase the availability of the aforementioned relative values. Therefore, “intrinsic” moral worth is still surely dependent on these relative values. I think you can either conclude this, or conclude that someone could desire the end of the world, and therefore be completely moral for going on a murderous rampage.

Somewhere towards the end of the second section, Kant starts punching holes in his own concepts, as I’d been waiting for him to do for half the book already. He mentions that there’s no way to construct a true kingdom of ends, as there is no incentive for people to behave morally.

He also refers to his universal imperative as “synthetic”, which is philosopher for “I made this shit up. Maybe I’ll justify it some other time”.

There is at some point a comparison between the categorical imperative and “do as you would be done by”. It’s essentially a less strict condition for morality, which removes the convicted man’s cry of “You wouldn’t want to be sent to prison. This is immoral.” I quite like that. Shame it’s a bit too woolly, because he’s trying to apply it to every possible situation. It’s also a shame that he adds loads of questionable assumptions without justification in order to get the formulations of “Treat others as ends in themselves” and “Rational beings must legislate universal law”.

Section 3: Transition from Metaphysics of Morals to Critique of Practical Reason.

I don’t have so many notes in this section as I read it a bit more quickly. Quite a lot of it is just more picking apart and limitations of the Categorical Imperative, which I had already concluded was pretty useless for encouraging moral behaviour in *anyone* in Kant’s form, but might provide the groundwork for a nice simplifying assumption at least.

Also included was “On a right to lie because of Philanthropic concerns”, which I have just realised I haven’t read. Maybe I’ll edit this some other time to add my views on that.

February 25, 2009

Music Visualisation Project Update

Filed under: Uncategorized — alsuren @ 11:56 am

Okay, so this is a *very* long overdue post about my project. I have an hour before lectures, and I’m running a couple of computer jobs that will take a while. (edit: about half an hour)

As some of you will know, my 4th year (MEng) project is all about music visualisation. The idea is to create a system that will take MP3 files, and turn them into thumbnail images. Songs which sound similar should also *look* similar. The idea is that it should act as a visual memory aid for DJs.

Right at the moment, I have a “baseline” system, which produces images like
this. Looking at the images from the baseline system, there don’t seem to be many similar-looking images (If you see any other than The Fox and Christopher Columbus, post a comment below).

So what’s going wrong?

There are a lot of configurable parameters of the system, so it might just be that it needs tuning. If you want to compare the performance of the system with a few parameter changes, try exploring the matrix found here. It might also be that I’m trying to pack too much information into each (very small) image. Currently I’m trying to squeeze 20 independent (scalar) pieces of information into each 20×20 image. What I need to try next is cutting down to the 3 to 10 pieces of information which are actually relevant, and making 50×50 images. I think I will also need to gather different pieces of information to include (initially extracted by hand, and then automatically extracted).

Also, the human eye is not very good at comparing brightness. I will try adding fake colour to the images, and see whether a different colour map performs better.
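As a toy illustration of the fake-colour idea (the ramp below is invented; the real thing would use a proper colour map from matplotlib or similar), each brightness value gets spread across hue:

```python
def false_colour(brightness):
    """Map a brightness in [0.0, 1.0] onto an (r, g, b) triple.

    A deliberately naive blue-to-red ramp: dark values come out blue,
    bright values red, so small brightness differences turn into hue
    differences, which the eye compares more easily than grey levels.
    """
    b = max(0.0, min(1.0, brightness))  # clamp to [0, 1]
    return (b, 0.0, 1.0 - b)

# A grey-scale row of pixels becomes a blue-to-red gradient:
row = [0.0, 0.5, 1.0]
coloured = [false_colour(v) for v in row]
```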

I’ll post something else this afternoon/evening. Have to run to lecture now.

January 24, 2009

Ideas for Programming Aids

Filed under: Uncategorized — alsuren @ 1:37 am

First off, it should be noted that I’m spending my Friday night posting on my blog. Yes. There’s definitely something wrong with that. Also, I have just instinctively written this document in restructuredtext, and can’t be arsed to change it to html for wordpress. If anyone knows of blogging software that supports restructuredtext, I might be tempted to change once again.

The rest of this post is me trying to describe how I’ve recently found myself developing software, and how I think a software development/testing framework should behave in order to fit in with this methodology.

I write all of my code in python, making extensive use of ipython and scipy/matplotlib. Most of the things I write are to solve specific problems in a very throw-away manner (I basically use python in the same way most people might use perl/bash/matlab scripts).

The development method goes something like this:

* Open up kate, and add the standard boilerplate to a python file:

from __future__ import division
from sys import argv
import numpy as N

def main(arg1="default value", arg2="something else"):
    """ Does whatever. """

if __name__ == "__main__":
    main(*argv[1:])

* Start writing some code in kate, (generally top-down to solve the problem at hand, with functions missing).
* Implement a few of the lower level functions (without really thinking about speed or comments, but generally following most of PEP8.)
* Either %run in ipython.
* Fix missing/broken implementations/add “return result” to the end of some functions.
* %run and repeat until it does what you want it to do.
* If it’s taking too long, %prun main(some parameters that will take less time to compute results for) and check for the function with the highest cumulative time (say it’s called slow_function()).
* Rename slow_function() to canonical_slow_function() and write another implementation with a few loops unrolled or something.
* Add “assert result_of_slow_function == canonical_slow_function()” to wherever the function is used.
* Reload, and %prun main() again (and notice that the new implementation takes 10x less time than the canonical one. Also feel safe that it’s numerically correct, since there is a built in regression test)
* Comment out the assert, and run again with the parameters that were taking too long before.
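The canonical-function step above might end up looking like this (a contrived sum-of-squares example; the real slow_function would be whatever the profiler flags up):

```python
def canonical_slow_function(xs):
    """Straightforward reference implementation: sum of squares."""
    total = 0
    for x in xs:
        total += x * x
    return total

def slow_function(xs):
    """Rewritten implementation, hopefully faster than the canonical one."""
    return sum(x * x for x in xs)

data = [1, 2, 3, 4]
result_of_slow_function = slow_function(data)
# The built-in regression test from the steps above:
assert result_of_slow_function == canonical_slow_function(data)
```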

So what do I want from a rapid-command-line-development framework?

* A command-line program a bit like (or python paster or whatever) or a set of magics/functions that can be used from within ipython, that will create a single python file with the boilerplate for a simple command-line app, and open your default editor. It should look something like this (and already be executable on unix):

from alternatives import costs, reimplementation
from commandline import run_if_main

AUTHOR = "David Laban "

def main(arg1="default value", arg2="something else"):
    """ Does whatever. """
    for i in range(100000000000):
        pass # waste time

def main(arg1, arg2):
    """ A second implementation, for the alternatives module to compare. """

run_if_main()

* Running this program with no arguments should run main() with its default arguments (see below for more information on how this should be implemented)

* Running the program with the option "--help" or "-h" will print a usage message of the following form: [--arg1="default value"] [--arg2="something else"]
Does whatever.

* The admin script/ipython magics should also be able to do things like:
* script-admin commit -m "First revision of", which will search for evidence of a revision control system, and then commit (running any tests that exist first, obviously, and prompting for any)
* script-admin set-defaults-from, which would take any modifications you make to the boilerplate variables, and save them somewhere useful, so that they become the defaults.
* script-admin create-manpage, which would generate a manpage with the author information at the bottom.
* script-admin create-bashcomp, which would create a bash completion file
* script-admin install /usr/local, which will install the script, and man pages to the prefix /usr/local.
* script-admin convert-to-project, which will take the script, and put it into a directory with a and its own revision control (if possible, keeping any revision history from the original file intact)
* script-admin test should run whatever tests are associated with

* The framework should always try to import the module and inspect it dynamically, rather than trying to parse python code. All of the above things can be done dynamically.
* run_if_main() may use stack introspection or whatever means necessary to make the API simple. It should look for a local or global variable in the calling scope called main, check whether it is callable, and inspect its arguments list. It should then use the optparse module or similar in order to forward command line arguments in to main. If no main function exists, it should look for objects of the form “command_*”, where * is the first command-line argument. It should then strip the first command-line argument, and act as before, but using command_* instead of main.
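A minimal sketch of the run_if_main() behaviour described above (stack introspection only; the optparse forwarding and the command_* fallback are left out):

```python
import inspect

def run_if_main(argv=None):
    """Look for a callable called main in the calling scope and run it.

    Sketch of the stack-introspection idea: check the caller's locals,
    then globals, for a callable named main, and forward arguments to it.
    """
    caller = inspect.currentframe().f_back
    main = caller.f_locals.get("main", caller.f_globals.get("main"))
    if not callable(main):
        raise RuntimeError("no callable main() in the calling scope")
    return main(*(argv or []))

def main(*args):
    return ("ran", args)

result = run_if_main(["a", "b"])
```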

* script-admin should be written using the commandline module

* alternatives should focus on functions without side-effects. This means it can use concepts like invariants (haskell-inspired) for testing.

* A possible implementation of the alternatives module might do something like this:
* Run either the first or second implementation (or both) of main() with the default arguments taken from the first one (initially at random, but probably using something smarter once there’s some data ( I’m sure Alex will be first to suggest ) ).
* Optionally store statistics about the CPU time in ~/.alternatives/costs, so that the best implementation can be used in future.

* All decorators in alternatives should have a lightweight alternative implementation which doesn’t add any overhead to function calls or tracebacks, and only incurs a small penalty at define-time, when the choice of best implementation is made, and docstrings/default arguments are copied from the canonical implementation.

* alternatives should include a function/command called “test”, “train”, “sleep” or “dream”, which goes through all functions registered in ~/.alternatives, and test them against each other (possibly using cached input values from real runs, and possibly using the coverage testing methodology). This should run as a separate process at nice 19 or something. There should also be a function/command called “report”, which produces either profiler-like output data, or matplotlib-quality graphs for each of the functions.

* If the @alternatives.costs.cputime decorator is used, then the return values should always be the same, and the implementation which returns in the shortest time should be chosen. It should be possible to create other cost functions that take the return value as in input, and return a score. This might be useful for evaluating alternative implementations of things like video compression or stochastic optimisation schemes.
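A toy version of the cputime cost function might look like this (all names invented; the real module would cache statistics in ~/.alternatives/costs rather than re-timing on every run):

```python
import time

def cputime_best(*implementations):
    """Time each implementation once on sample arguments, check they agree,
    and hand back the fastest one. A toy sketch of the cputime idea."""
    def choose(sample_args):
        results, timings = [], []
        for impl in implementations:
            start = time.time()
            results.append(impl(*sample_args))
            timings.append(time.time() - start)
        # cputime implies the return values must always be the same:
        assert all(r == results[0] for r in results)
        return implementations[timings.index(min(timings))]
    return choose

def squares_loop(n):
    out = []
    for i in range(n):
        out.append(i * i)
    return out

def squares_comprehension(n):
    return [i * i for i in range(n)]

best = cputime_best(squares_loop, squares_comprehension)((1000,))
```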

So these are two things that I think would be useful on a reasonably regular basis. I probably won’t be thinking about implementing such things any time soon though, but if anyone can think of anything that already does what I want, please tell me.

November 19, 2008

Dance Pointers

I might have called this post “Dance Philosophy”, or “Best Practices”, but that would imply that I have checked it for self-consistency and so-forth. This is a set of beliefs/observations that I have about leading, and I am completely happy if other people don’t hold the same beliefs, but I will defend my beliefs if attacked (maybe “Dance Theology” would be more appropriate). I make no comments about which bits apply to following as well.

1) If you are having trouble dancing with a follow (of any ability), then a good solution is *always* to become a better lead.

1.1) Telling a follow that she’s doing something “wrong” is *always* the wrong thing to do.
1.1.1) In most cases, she can feel that something’s wrong, and so doesn’t need to be told.
1.1.2) You will (in most cases) be incapable of explaining it in words.
1.1.3) In most cases, it’s because you’re doing it wrong yourself.
1.1.4) If you can lead the difference between doing it “wrong”, and doing it “right”, then it will be more effective than words (see point 1).
1.1.5) Telling someone that they should *not* do something is dangerous. What you want someone not to do may be exactly what someone else wants them to do in some situation. If you can show people alternatives, and let them pick for themselves, this will reduce the number of people doing the “wrong” thing, without removing their ability to do so if the need arises.

1.2) If you find yourself wanting to “know” more “moves”, then this is a sign that your understanding of the things you already “know” is not strong enough.
1.2.1) It is more enjoyable to play with the subtleties of a few moves than a lot of different moves done in the same way.
1.2.2) If there are 7 independent layers of lead/follow (ask Andrew Sutton for a list and he will consistently produce at least that many (though they may not always be the same 7)) then you have at least 2^7=128 variations on each move. If you get bored with 2^7, then try 3^7 and so on.

1.3) If you are getting a lot of awkward moments, then you need to go back to basics.
1.3.1) If your connection is broken, you will be impossible to follow. If your frame is too weak then your follow will not be able to feel what you want her to do. If your frame is too rigid/jerky, it will break your follow’s frame. Similarly, if you break your frame by doing an awkward arm lead, then it will break your follow’s frame. If you have too little tension, your follow may not know when you want her to move (especially problematic with fast music). If you have too much tension, then your follow will feel too forced, and have no freedom to do her own thing (especially in slow music, and music she knows well). This is often referred to as a lack of responsiveness. All of the above things affect all of the above things.
1.3.2) Having “just enough” control to reliably lead what you want is ideal. It is important to find out what “just enough” is for every follow.
1.3.3) If you are doing a lot of different “moves” then you both need to “know” them and get them “right” in order to avoid conflicts.
1.3.4) If you are doing lots of subtle variations on basic moves, then there will be no conflict if your follow ignores them. If there is a conflict, then your “subtle variation” is not a subtle variation.

To be continued, I suspect.

If someone wants to expand this onto a wiki somewhere, please do so. I’m thinking that each of the points should be the title of a page, with each page being a stub with a “Consider first” link pointing to its parent, “Consider also” links pointing to its siblings, and “Consider next” pointing to its children. It would then be possible to flesh out each page with an “Examples” section.

November 17, 2008

209 Radio

Filed under: Uncategorized — alsuren @ 1:13 am

Over the past few years, I haven’t been listening to much radio, because my music collection has been so large (especially since finding

Recently though, since breaking my decent phone, I have been listening to the radio more. After a little channel-hopping, I have discovered 209 community radio. I’ve been listening to it today, and it has been made entirely of yay. This is likely to be because it’s the weekend, and therefore full of real people. If you look at the schedule then you will see that this is not always the case.

The website also has links to an archive, and all content is available as both mp3 and ogg. I suspect that this touch was an ad-lib by whichever linux geek set up the website. (It really does look like it’s become someone’s personal playground, with a mixture of cool features and questionable style)

November 14, 2008

Versioned Home Directory, and other Ideas for Projects.

I have started using version control (bzr) on my home directory. This hopes to eventually solve a few problems:

1) Sharing settings with other people. This is something that I’ve been looking for a solution to for a while (there are standard ways to share apps and themes ( and pals) but not configs). If everyone keeps their configs versioned, then it should be possible to cherry-pick changes more easily.

2) Creating consistent settings across many different linux machines (as discussed in my colourhash post). (side note relating to colourhash: many graph plotting programs have ways of automatically assigning distinct colours. I will look at that at some point too.)

3) As a backup framework: If I have all of my settings under distributed version control on 4 machines, then when I accidentally delete large chunks of my home directory (like the other day, when cmake created a folder called ‘$HOME’ which I wanted to delete…) then I don’t lose all of my rss feeds, proxy settings (which I still haven’t managed to get working again, thanks to KDE’s incredibly fragile socks[lack-of] implementation/configuration) and email settings (resulting in me not being alerted about emails for 2 days(>20 emails))[/rant]

The progress so far is a ~/.bzrignore file as long as your arm.

Eventually, I plan to host it on launchpad (as soon as I’ve verified that it doesn’t contain any security-critical information (I don’t think I have anything else that I would have a problem hosting on launchpad. Reading the content of my other posts might give you an idea of my views on privacy.))

==== Technobabble-filled braindump below this line ====

If anyone knows how to do nested repositories (eg. so I can get bzr to manage my ~/src as well, and so that I can have sensitive information like ~/.ssh and ~/.gpg versioned in some way that lets *me* merge them between computers, but doesn’t expose them on launchpad) give me a shout.

In other news, I taught myself a bit of perl last night, when trying to add sed-style text replacement to pidgin (by hacking apart a script called whose interface was arbitrarily horrible, and only allows output replacement). I’m currently fighting with pidgin’s settings management to get persistent rules. If anyone wants it, get in touch. Otherwise, it will be in launchpad under ~/.purple/ when I get my home directory on there :D.

If anyone wants to give me input on the interface, it would be muchly appreciated. Currently, we have:
/sed foo-to-bar s/foo/bar/g
(to add a replacement rule)
/sed foo-to-bar s///
(to replace an existing rule)

I’m thinking something more like:
/sed s/foo/bar/g
(to add a rule; a number will be assigned to each rule as an identifier)
/sed -l
(to list all rules, and associated numbers)
/sed -d #number
(to delete rules)
/sed -o s/foo/bar/g
(to only correct outgoing text)
/sed -i s/foo/bar/g
(to only correct incoming text)
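If the perl ever gets rewritten, the rule bookkeeping behind that interface might look something like this in python (hypothetical names throughout; the real plugin also has to fight pidgin's settings management for persistence):

```python
import re

class SedRules:
    """Numbered substitution rules, as in the proposed /sed interface."""

    def __init__(self):
        self.rules = {}     # number -> (pattern, replacement, direction)
        self.next_id = 1

    def add(self, pattern, replacement, direction="both"):
        """Add a rule; returns the number assigned as its identifier."""
        self.rules[self.next_id] = (pattern, replacement, direction)
        self.next_id += 1
        return self.next_id - 1

    def delete(self, number):
        """/sed -d #number"""
        self.rules.pop(number, None)

    def apply(self, text, direction):
        """Run every rule matching `direction` ("incoming"/"outgoing")."""
        for pattern, replacement, d in self.rules.values():
            if d in ("both", direction):
                text = re.sub(pattern, replacement, text)
        return text

rules = SedRules()
number = rules.add("foo", "bar")          # /sed s/foo/bar/g
corrected = rules.apply("food", "outgoing")
```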

Unfortunately, perl is a *horrible* language (doesn’t even have a concept of named function arguments) so the resulting code is unlikely to be anything I’m proud of.

While I think about it, I also had a load of ideas for python-based projects:
A man-page parsing command-line completion handler for ipython (and possibly bash, but bash scripts take so much longer to debug, and I get the feeling I will soon be using ipython as my default shell anyway.)
Given that debian policy forces all commands to have a man page, this is a pretty reliable way to write a powerful tab-completer. Also, since you only ever read the man page when the tab completion doesn’t work, you might as well get the tab-completer to read the man page for you.
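The scraping half of that idea is pretty simple to sketch (the regex below is a deliberately naive stand-in for real man-page parsing):

```python
import re

MAN_EXCERPT = """
       -l, --list
              list all rules
       -d, --delete
              delete the named rule
"""

def options_from_man(text):
    """Scrape '--long' option names out of man-page text for a tab-completer.

    Toy sketch: real man pages would need groff-aware parsing, section
    filtering, and so on.
    """
    return re.findall(r"--[a-z][a-z-]*", text)
```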

A callback/decorator library for creating command-line programs, with an interface along the lines of:

@clargs.handles("-f", "--filename")
def input_filename(filename):
    """The filename you want to read."""
    global input
    input = filename

@clargs.arguments("REPEATS", int)
def main(repeats):
    """Reads FILENAME to stdout REPEATS times."""
    text = open(input).read()
    for i in xrange(repeats):
        print text

if __name__ == "__main__":

It should also auto-generate help and man pages using the information given. An even more fun thing to do (with python3000) would be to use nose-style runtime inspection to detect a function of the form:

def handle_filename(filename: str):
    """The filename you want to read."""
    global input_file
    input_file = filename

and make that handle --filename input.txt (maybe with an @shortopt('f') decorator).
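Under Python 3 as it actually shipped, that runtime inspection could look like this (option_name and the handle_ naming convention are hypothetical):

```python
import inspect

def option_name(func):
    """Derive a '--long-option' from a handler's name, checking its
    annotation nose-style at runtime."""
    (name, param), = inspect.signature(func).parameters.items()
    assert param.annotation is str, "expected a str-annotated argument"
    return '--' + func.__name__.replace('handle_', '', 1)

def handle_filename(filename: str):
    """The filename you want to read."""

print(option_name(handle_filename))  # --filename
```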

A subclass of numpy.ndarray that has named axes, and user-specified ranges, so…

likelihoods = semantic_array( ('t', 1000), ('x', -100, 100), ('v', -10, 10) )
# sum over v, and preserve the x and t axes.
position_likelihoods = sum(likelihoods, axis='v')
# get the best guess of x for each time t.
maximum_likelihood_x_estimate = argmax(position_likelihoods, axis='x')
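A minimal proof-of-concept of the named-axes half (ignoring the user-specified ranges; the class and its constructor are my own sketch, not the interface above):

```python
import numpy as np

class SemanticArray(np.ndarray):
    """Sketch: an ndarray whose axes can be addressed by name."""
    def __new__(cls, data, axis_names):
        obj = np.asarray(data).view(cls)
        obj.axis_names = list(axis_names)
        return obj

    def __array_finalize__(self, obj):
        # Called when views/copies are made; propagate the axis names.
        self.axis_names = getattr(obj, 'axis_names', None)

    def sum(self, axis=None, **kwargs):
        if isinstance(axis, str):
            axis = self.axis_names.index(axis)
        return np.asarray(self).sum(axis=axis, **kwargs)

likelihoods = SemanticArray(np.ones((1000, 201, 21)), ['t', 'x', 'v'])
# sum over v, preserving the t and x axes:
position_likelihoods = likelihoods.sum(axis='v')
print(position_likelihoods.shape)  # (1000, 201)
```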

A delayed evaluation library (might end up stealing a lot of ideas from scipy and sympy, with a good chunk of twisted to boot)
An interesting feature of python is that its assignment operator is *purely* a pointer-update. When you say "x=y" it just makes x point to the same thing y points to. This means that if x gets passed into your function, you can safely write x = x*10, and it won't modify x in the code that called you. This lack of side-effects (and all manner of other things) makes many python libraries look like pure-functional libraries.
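A quick demonstration:

```python
def scale(x):
    x = x * 10   # rebinds the local name x; the caller's binding is untouched
    return x

y = 5
z = scale(y)
print(y, z)  # 5 50
```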

On the down-side, it won't let you override the assignment operator, so when you're dealing with large amounts of data, you can't re-use arrays without jumping through hoops. If X is a 1000000x1000000 matrix, your choices are:
X = multiply(X,10) # The canonical form, but it creates a temporary variable for the return value which is alive at the same time as X (potentially taking up twice as much memory as needed)
multiply(X, 10, out=X) # The numpy interface (potentially does the multiplication in place)
X *= 10 # works in-place, and is all very good, but what if I want to do something that's not +=,-=,*= or /=?

Then there are the slightly more hacky options, which involve delayed evaluation:
X == multiply(X, 10) # Override the logical equals operator X.__eq__ (the sympy method for writing symbolic equations). This is the most horrible, because it stops you being able to do X = (X==Y)
context.X = multiply(context.X, 10) # Override the attribute assignment operator context.__setattr__
X[:] = multiply(X, 10) # Override the item assignment operator X.__setitem__  (or sometimes X.__setslice__, I think)
Note that this last one feels quite like fortran, but it might be the least horrible of all the interfaces.

So how would these things work? A simple sympy-style one looks like this:
import numpy

def multiply(X, Y):
    def deferred_calculation(output):
        numpy.multiply(X, Y, out=output)
    return deferred_calculation

# In X's class definition:
def __eq__(self, deferred_calculation):
    deferred_calculation(output=self)  # evaluate the deferred result into self, in place

The problem is that if you accidentally do X = multiply(X,Y) then X is just the deferred_calculation function. That’s not very useful. On the other hand, if the returned “deferred_calculation” object can be made to behave like a numpy array, then you’re in for a win.

The fun stuff will start to happen when you start using these “deferred_calculation” objects, and passing them in and out of other functions, so you have a massive chain of deferred calculations. If you then include an interface for inspecting chains of deferred objects, you can start to write deferred-to-$LANGUAGE compilers, which would let you write “say what you mean” algorithmic code in python.

A way of writing twisted applications in a blocking style (using generator expressions).
This idea is in some ways quite similar to the idea above. I’m sure I sketched an implementation up somewhere (possibly on the eee), but I don’t seem to have posted about it. The gist of it is as follows:

In the example below, unblockify (implementation omitted) acts like a filter in two ways:
From the caller’s point of view, some_generator produces a sequence, but only things that aren’t deferreds get let out of the filter to the caller.
From the generator’s point of view, “yield” acts like a filter (or in compsci terms, a “map”). Any “deferred” objects sent through it get turned into real objects, and any real objects sent through it disappear. I’m still deciding whether to do something magical when None gets yielded. We’ll see.

@unblockify
def some_generator():
    while True:
        result_of_deferred = yield function_which_returns_deferred()
        yield some_immediate_function(result_of_deferred)

for out in some_generator():
    print out

While this program appears to be blocking, it shouldn’t cause unresponsiveness in GUI applications, because the filter passes control to the twisted reactor while it’s waiting for each deferred.
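Twisted’s own inlineCallbacks works along these lines. A toy synchronous version of the filter (with a stand-in Deferred and no reactor, in modern Python 3 syntax) shows the shape of the idea:

```python
class Deferred(object):
    """Stand-in for twisted's Deferred; here it just wraps a ready value."""
    def __init__(self, value):
        self.value = value

def unblockify(generator_func):
    """Toy synchronous sketch: Deferreds yielded by the generator are
    'resolved' and sent back in; plain values pass through to the caller."""
    def wrapper(*args, **kwargs):
        gen = generator_func(*args, **kwargs)
        try:
            item = next(gen)
            while True:
                if isinstance(item, Deferred):
                    # A real implementation would hand control to the
                    # reactor here until the Deferred fires.
                    item = gen.send(item.value)
                else:
                    yield item
                    item = next(gen)
        except StopIteration:
            return
    return wrapper

@unblockify
def doubler():
    result = yield Deferred(21)
    yield result * 2

print(list(doubler()))  # [42]
```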

If anyone is interested in any of these projects, please shout.

October 27, 2008

Colour Prompt

Filed under: Uncategorized — Tags: , , , , , , , , — alsuren @ 9:25 pm

Inspired by konversation’s pretty colours for users in an IRC channel, and pissed off with forgetting which of the many (smaug, soup, pip, concorde, harrier, excalibur, telford, dl325) linux boxes I’m remotely logged into at any given time, I decided to assign a different prompt to each host. Since some of the machines share home directories but not installed software, hard-coding a colour into each machine’s ~/.bashrc wouldn’t hack it.

That’s enough talking… I present to you “colourhash”.

# Save this file as 'colourhash', and put it somewhere in your $PATH
# Usage: colourhash [string] ; echo message
# Wherever you see a colour escape sequence in your ~/.bashrc (like \033[01;32m (bold green))
# replace it with '$(colourhash)'
if [[ -n $@ ]]
then
    # We have command line arguments: use those as the string to hash
    string="$@"
else
    # We don't have command line arguments. Create a colour for this user@host combo
    string="$(whoami)@$(hostname)"
fi
echo -n ${string} | cksum | (read hash tail
    # We only really care about the first word printed by cksum: a crc hash of $string.
    # Also note that crc is not a cryptographic checksum. This is not important, since we
    # only have 7 colours to pick from, so hash collisions will be frequent.
    # The readable colour codes lie between 31 and 37
    colour_code=$(( 31 + ($hash % 7) ))
    # Magic escape sequence follows:
    echo -en '\033[01;'${colour_code}'m'
)

Right now I’m only picking one of 7 colours. Who fancies doing the birthday problem on that?
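To save anyone the napkin: with 7 colours the numbers fall out like this (a quick Python check, nothing to do with the script itself):

```python
from math import prod  # Python 3.8+

def p_all_distinct(colours, hosts):
    """Birthday problem: chance that `hosts` independent hashes into
    `colours` buckets all land on different colours."""
    return prod((colours - i) / colours for i in range(hosts))

# The 8 hosts listed above can't all be distinct among 7 colours:
print(p_all_distinct(7, 8))               # 0.0
# Even 4 hosts collide more often than not:
print(round(1 - p_all_distinct(7, 4), 3))  # 0.65
```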

If I end up getting too many hosts the same colour, I will start thinking of ways to increase the number of colours available. There are potentially lots of colour combinations to pick from, but changing the background colour might look a bit odd. If anyone wants to wrap this up and push it into ubuntu/gentoo, give me a shout.

This is a good reference for anyone adding colours to shell programs. Code to generate the above grid can be found here.
