Guidelines for fellowship application

Here are a few notes on the stages to go through when applying for a fellowship (NOTE: this is a work in progress, and some stages are only relevant if you are working in the School of Physics & Astronomy at the University of Glasgow):

  1. Read the fellowship guidelines (e.g. here for the STFC Rutherford Fellowship, here for the Royal Society University Research Fellowship, or here for the RSE/Scottish Government Personal Research Fellowship). Make sure to note the application deadline and any specific details of how much equipment/travel expense can be claimed, how many references are required and how the references should be submitted.
  2. Discuss applying with your line manager/research group leader/head of school (either locally or at the institution where you wish to apply). Note that for some fellowships, e.g. the STFC Rutherford Fellowship, schools/departments have a maximum quota of applications, so discuss whether you can be part of the quota.
  3. Download and fill in a “costings request form” from R&E. Fill in:
    • Funder Name
    • Project type/scheme
    • URL of the scheme (see above)
    • Project start date (generally 1st October of the following year)
    • Duration (generally 5 years)
    • List yourself as the PI and put 100% for %age split and %age of time (provided that this is true for the application!)

    Remember to fill in the travel and equipment budget on the second page (the amounts are based on what can be applied for in the fellowship, but think a bit about how you want to split this rather than just putting the maximum possible value in for each year). Email the form to

  4. Start the application, which is often through an online application form. For the UK research council applications you apply via the Je-S system, and for the Royal Society you apply through the e-GAP system, so if you have not applied before create a new account. For the Je-S system, once logged in, click on “Documents” and then under the “Create” section click “New document”; you can then select the fellowship you want to apply for. For the e-GAP system, once logged in, click on “Schemes”, find e.g. the Royal Society URF scheme and click apply.
  5. R&E should send you a draft version of your Project Approval Form (PAF) [which contains all the fellowship costings]. Once you have checked this over they will send you a final version. Print this out and take it to one of the school’s research coordinators (currently Sheila Rowan and Andy Harvey) who can double check it and initial it. Once that is done take it to Lynne Stewart to be signed by the head of school (currently Martin Hendry). It finally needs to be signed by the college, so you can either ask Lynne to pass it on, or take it to the college office in the Boyd Orr Building. Once signed it can be returned to R&E.
  6. R&E need to fill in the finance details of your application. For the online forms you can share the application with R&E staff. For the e-GAP process once in your application click on the “Share Application” button and then “Add Share” – enter the email address and then share the relevant section with “Ms Grants Managers”(!). For the Je-S system hover over the “Document Actions” button and click “Administer User Access” then add an editor using the email address of the R&E person you have been dealing with.
  7. Generally applications require two major sections: a research proposal (i.e. what you’re going to do during the fellowship) and a lay report (i.e. the background and context of your field and a summary of your research project in layman’s/non-specialist terms). Read the fellowship application guidelines to find out the requirements for page length/word (or character) counts, and font/font size/border size for these. Often these can either be entered directly into the web form or uploaded as PDFs (which is generally the better option). Currently for the Royal Society URF the research proposal is a maximum of two sides of A4 with a minimum font size of 10pt Arial. For the STFC fellowship the research proposal is a maximum of three sides of A4 with a minimum font size of 11pt Arial and maximum border sizes of 2cm.
  8. Get someone, or several people, to read over your application independently. Feedback is always useful. For lay reports it is especially useful to get someone from outside your field to check that it is exciting and free of technical jargon.
  9. Submit the form with adequate time to spare. Once you submit, email R&E, as the application is not actually submitted until they have signed it off. Give them time to check for any mistakes/omissions so that you can correct them. DON’T wait until 5pm on the deadline day to submit.

The log of a determinant in Matlab

If you’re calculating log likelihoods (for example) it may be that you have a large covariance matrix for which you want to calculate the log of its determinant. In many cases the actual value of the determinant may be too large for computational precision to handle, and in e.g. Matlab det(A) might return an Inf – this in turn would give an Inf for the log of the determinant. The solution, if you just want the log likelihood anyway, is given by this post, but I repeat the main Matlab commands here (note: the covariance matrix must be positive definite for this to work). For a covariance matrix A:

L = chol(A);
logdetA = 2*sum(log(diag(L)));
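
The same trick works in Python with NumPy (a sketch; NumPy also provides np.linalg.slogdet, which computes the log-determinant directly and doesn’t require positive definiteness):

```python
import numpy as np

# A positive-definite covariance matrix whose determinant overflows
# double precision: 1000x1000 diagonal with entries of 1e3, so
# det(A) = (1e3)^1000, far beyond floating-point range.
n = 1000
A = np.diag(np.full(n, 1e3))

# np.linalg.det(A) overflows to inf, but the Cholesky factor gives
# log|A| stably as twice the sum of the logs of its diagonal entries.
L = np.linalg.cholesky(A)
logdetA = 2.0 * np.sum(np.log(np.diag(L)))
```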

Matlab MultiNest

Joe Romano and I have written an implementation of MultiNest in Matlab, which (after a while of sitting just on our computers) is now available as a beta release from the MultiNest repository.

This can be used to perform evidence calculation and posterior parameter estimation using nested sampling for a given dataset, likelihood function and model function. The release includes examples of how to set up these functions for use with the nested sampling routine. The code can use the MultiNest algorithm (implementing the method in “Algorithm 1” of Sections 5.2 and 5.3 of Feroz, Hobson & Bridges, MNRAS, 398, pp. 1601-1614, 2009, but not the extra steps in Sections 5.4 through 5.7) to draw samples from the prior volume. Alternatively, it can use an MCMC method to draw new samples (based on the method proposed in Veitch & Vecchio, PRD, 81, 062003, 2010), where the proposal is a Student-t distribution with 2 degrees of freedom defined by the covariance of the current “live points” (also using differential evolution 10% of the time to aid hopping between modes in the likelihood).
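
As a rough illustration of that MCMC proposal step (this is my own NumPy sketch, not code from the release – the function name propose, the covariance jitter term, and the exact differential evolution step are all illustrative choices):

```python
import numpy as np

def propose(live_points, rng):
    """Draw a proposed sample from the current set of live points:
    90% of the time from a Student-t (2 d.o.f.) distribution scaled by
    the live-point covariance, 10% by differential evolution."""
    npts, ndim = live_points.shape
    current = live_points[rng.integers(npts)]
    if rng.random() < 0.1:
        # differential evolution: jump along the vector joining two
        # randomly chosen distinct live points
        i, j = rng.choice(npts, size=2, replace=False)
        return current + (live_points[i] - live_points[j])
    # Student-t with 2 d.o.f. as a normal/chi-square mixture, with the
    # normal draw coloured by the Cholesky factor of the covariance
    cov = np.cov(live_points, rowvar=False)
    Lc = np.linalg.cholesky(cov + 1e-12 * np.eye(ndim))  # small jitter
    z = rng.standard_normal(ndim)
    u = rng.chisquare(2)
    return current + (Lc @ z) * np.sqrt(2.0 / u)

rng = np.random.default_rng(0)
pts = rng.standard_normal((50, 3))  # 50 live points in 3 dimensions
new = propose(pts, rng)
```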

These functions haven’t undergone much testing or been optimised for efficiency, so if you use them and have any feedback please let me know.

The code has a license allowing free academic use and modification.

gsissh for Ubuntu

The Globus package containing gsissh (and its scp/ftp-ing brethren) isn’t in the standard Ubuntu repositories. In the past I’ve had to install bloated packages to use it (e.g. within the LIGO Scientific Collaboration we have the LDG Client, which contains many things that the average user doesn’t need). Relatively recently I found the stand-alone package from Globus that I need to get just gsissh: gsi-openssh-clients (the link takes you to the latest stable v5.2 version for Ubuntu 12.04, but if you edit the link appropriately [go down to the /gt5 part] you can find older versions, versions for older Ubuntu/Debian distros, and also RPMs for CentOS/Fedora/etc.). From the link download the appropriate .deb version of gsi-openssh-clients***.deb and install it with dpkg. Alternatively, and perhaps more sensibly, add e.g.

deb precise contrib

to your /etc/apt/sources.list file and install via

sudo apt-get update; sudo apt-get install gsi-openssh-clients

This is mainly here to remind me of the location as I’ve been asked about it a couple of times and often forget.

Note: to use gsissh you will need to install some Globus packages that are within the standard Ubuntu repositories, e.g. sudo apt-get install globus-proxy-utils (which will install most, if not all, of the required dependencies).

Also note: a lot of the time when gsissh-ing you need to have the bundle of X509 trusted certificates (often installed in /etc/grid-security/certificates). If you don’t have this then you can download the latest osg-ca-certs .deb package from here and install it using e.g.:

sudo dpkg -i osg-ca-certs_1.31NEW-0_all.deb


Why gravitational wave interferometers work

Mainly for my own benefit, here’s a link to Amber Stuver’s Living LIGO blog post answering why interferometers work as gravitational wave detectors, given that GWs alter the wavelength of light as well as the interferometer arm length. I’m not going to rehash the answer, so go there for it (or, for a rather more technical answer, see arXiv:gr-qc/0511083 – a paper I found ages ago but then forgot about until I saw Jorge Pullin’s comment on Amber’s post). I’ll just say that it’s something that’s easier to think about when using pulsar timing as your detector, because there you actually do have the pulsars producing clock ‘ticks’.

Creating TEMPO-style parameter files directly from the ATNF catalogue

The ATNF catalogue is a very useful way to get information on pulsar parameters, but I wanted to be able to download the parameters in the style of TEMPO par files for some subset of pulsars based on a condition. To do this I’ve created a python script to query the ATNF catalogue for all information for all pulsars matching a certain condition, e.g. “f0 > 10”, and output it in .par files. This script (downloadable here) requires the BeautifulSoup python package. An example of its use is:

./ -c “f0 > 10” -o pulsardir

where -c specifies the condition the returned pulsars must satisfy (in this case, only return those with a frequency greater than 10 Hz), as described here, and -o gives the output directory for the par files. All files will be output using the pulsar’s “J” name. By default the script returns all parameters, i.e. the “long” format, but a shorter “short” format can be output using the -s argument – see Section 3.4.1 of the ATNF catalogue documentation.
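
For the output side, a TEMPO-style par file is just one parameter per line. A minimal sketch of writing one from a dictionary of parameters keyed by their TEMPO names (write_par, the tab-separated layout, and the example values are my own illustrative choices, not necessarily what the script itself does):

```python
import os

def write_par(params, outdir):
    """Write a TEMPO-style .par file named after the pulsar's 'J' name."""
    os.makedirs(outdir, exist_ok=True)
    path = os.path.join(outdir, params["PSRJ"] + ".par")
    with open(path, "w") as f:
        # one "NAME value" pair per line
        for key, value in params.items():
            f.write("%s\t%s\n" % (key, value))
    return path

# illustrative parameters only (a frequency and spin-down for the Crab)
path = write_par({"PSRJ": "J0534+2200",
                  "F0": "29.946923",
                  "F1": "-3.77535e-10"}, "pulsardir")
```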

A properly normalised histogram

I’ve created a Matlab function that properly normalises a histogram with an upper and lower boundary. The main reason for this is that if you have a relatively small number of bins and normalise using the trapezium rule on just the area under those bins, you’ll often slightly underestimate the area. This is particularly noticeable if your histogram is up against a hard boundary at which it takes a large value, e.g. a histogram of positive data peaked close to zero. My function adds extra bins at the edges of the histogram data to mitigate this effect. Here it is (hopefully self-explanatory):

function [n, x] = histnormbounds(y, m, low, high)

% [n, x] = histnormbounds(y, m, low, high)
% Create a normalised histogram (normalised using the trapezium rule to
% calculate the area). The low and high values are the boundaries of the 
% histogrammed data, and ensure that the histogram gets properly normalised
% if close to the edges of the boundaries. m is the number of histogram 
% bins or a vector of bin values.

% set default bounds if none are given (low = -inf and high = inf)
if ~exist('low', 'var')
    low = -inf;
end

if ~exist('high', 'var')
    high = inf;
end

% check that the data lie within the bounds
if min(y) < low || max(y) > high
    error('Data is outside of bounds');
end

% get histogram
if isscalar(m)
    if m < 2
        error('Need more points in histogram');
    end
    [n, x] = hist(y, m);
else
    if length(m) < 2
        error('Need more points in histogram');
    end
    n = hist(y, m);
    x = m;
end

botbinwidth = x(2)-x(1);
topbinwidth = x(end)-x(end-1); % just in case x isn't uniform

% if histogram points are not close to the boundaries (i.e. within a bin
% of the boundaries) then add zeros to the histogram edges
if x(1)-botbinwidth > low
    n = [0, n];
    x = [x(1)-botbinwidth, x];
end

if x(end)+topbinwidth < high
    n = [n, 0];
    x = [x, x(end)+topbinwidth];
end

% if the histogram is closer to the boundary edge than the bin width then
% set a new bin on the boundary with a value linearly extrapolated from the
% gradient of the adjacent points
if x(1)-botbinwidth < low
    dx = x(1) - low;
    x = [low, x];
    dn = n(2)-n(1);
    nbound = n(1) - (dn/botbinwidth)*dx;
    n = [nbound, n];
end

if x(end)+topbinwidth > high
    dx = high - x(end);
    x = [x, high];
    dn = n(end)-n(end-1);
    nbound = n(end) + (dn/topbinwidth)*dx;
    n = [n, nbound];
end

% normalise n by its area
n = n/trapz(x, n);
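
For anyone not using Matlab, here is a rough NumPy translation of the function above – a sketch based on my reading of the algorithm, not a drop-in replacement (the name histnormbounds and the arguments mirror the Matlab version; note that numpy.histogram returns bin edges, so the code converts to bin centres to match Matlab’s hist):

```python
import numpy as np

def histnormbounds(y, m, low=-np.inf, high=np.inf):
    """Normalised histogram with hard bounds (trapezium-rule area = 1)."""
    if y.min() < low or y.max() > high:
        raise ValueError("Data is outside of bounds")
    counts, edges = np.histogram(y, bins=m)
    n = counts.astype(float)
    x = 0.5 * (edges[:-1] + edges[1:])  # bin centres, as Matlab's hist uses
    botw, topw = x[1] - x[0], x[-1] - x[-2]
    # pad with zeros when the boundary is more than a bin width away
    if x[0] - botw > low:
        n, x = np.r_[0.0, n], np.r_[x[0] - botw, x]
    if x[-1] + topw < high:
        n, x = np.r_[n, 0.0], np.r_[x, x[-1] + topw]
    # otherwise linearly extrapolate the edge value onto the boundary
    if x[0] - botw < low:
        nb = n[0] - (n[1] - n[0]) / botw * (x[0] - low)
        n, x = np.r_[nb, n], np.r_[low, x]
    if x[-1] + topw > high:
        nb = n[-1] + (n[-1] - n[-2]) / topw * (high - x[-1])
        n, x = np.r_[n, nb], np.r_[x, high]
    # normalise by the trapezium-rule area
    area = np.sum(0.5 * (n[1:] + n[:-1]) * np.diff(x))
    return n / area, x
```

For example, for positive data peaked near zero (the motivating case above), calling histnormbounds(y, 20, low=0.0) places an extrapolated bin exactly on the zero boundary before normalising.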