Plotting and analysis tools

Contents:

A collection of tools, tips, and tricks.

2009-07-20 22:36 IJC: Created

2010-10-28 11:53 IJMC: Updated documentation for Sphinx.

2011-06-15 09:34 IJMC: More functions have been added; cleaned documentation.

tools.addobj(obj1, obj2, exclude='TBD')[source]

Combine fields in two objects with the same attributes. A handy function!

Inputs :

obj1, obj2 : objects of the same kind

Returns :

obj12 : new object, with fields of 1 and 2 combined.

OR

-1, if the objects have absolutely no attributes in common.

Notes :

Methods/attributes beginning with an underscore (_) are not combined.
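
Example (a minimal sketch; the objects and their field values are illustrative placeholders built with dict2obj, documented below):
import tools
obj1 = tools.dict2obj(dict(time=[1, 2], flux=[0.9, 1.1]))
obj2 = tools.dict2obj(dict(time=[3, 4], flux=[1.0, 0.8]))
obj12 = tools.addobj(obj1, obj2)   # new object with the fields of obj1 and obj2 combined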

tools.areaOverlappingCircles(r1, r2, d)[source]

Return overlapping area of two circles. From Wolfram Mathworld.

r1, r2 are the radii of the circles.

d is the distance between their centers.
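
Example (a quick sanity-check sketch; the specific radii and distances are arbitrary):
import tools
print tools.areaOverlappingCircles(1., 1., 0.)   # coincident unit circles: expect pi
print tools.areaOverlappingCircles(1., 1., 3.)   # centers farther apart than r1+r2: expect 0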

tools.array_or_filename(input, kw_getdata={}, kw_array={}, noneoutput=None)[source]

If input is a Numpy array, return it. If it is of type str, use Pyfits to read in the file and return it. Keyword options are available for the calls to pyfits.getdata and numpy.array. If input is None, return noneoutput.

tools.blam(T, lam)[source]

Planck function in wavelength.

Inputs :
T : scalar or array

temperature in Kelvin

lam : scalar or array

wavelength in microns

Value returned is in (nearly) cgs units: erg/s/cm^2/micron/sr

tools.bnu(T, lam)[source]

Planck function in frequency.

Inputs :
T : scalar or array

temperature in Kelvin

lam : scalar or array

wavelength in microns [but intensity will be per Hz]

Value returned is in cgs units: erg/s/cm^2/Hz/sr
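
Example (a brief sketch evaluating both Planck functions on a wavelength grid; the temperature and grid are illustrative):
import numpy as np
import tools
lam = np.linspace(1., 30., 100)        # wavelengths in microns
Blam = tools.blam(1500., lam)          # erg/s/cm^2/micron/sr for a 1500 K blackbody
Bnu = tools.bnu(1500., lam)            # erg/s/cm^2/Hz/sr for the same temperature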

tools.combinations(input_list)[source]

Return all possible combinations of the elements in an input sequence. The last returned element will be the empty list.

E.g., combinations([0,1]) returns [[0, 1], [0], [1], []]

Taken from the internet:
http://desk.stinkpot.org:8080/tricks/index.php/2008/04/get-all-possible-combinations-of-a-lists-elements-in-python/
Requirements :copy
tools.cplim(a1, a2)[source]

Copy axis limits from one axes to another.

Inputs :a1, a2 – either (1) handles to axes objects, or (2) figure numbers. If figures have subplots, you can refer to a particular subplot using decimal notation. So, 1.3 would refer to subplot 3 of figure 1.
Requirements :matplotlib (when this is written...)
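
Example (a sketch; figure numbers are arbitrary), copying the limits of subplot 3 of figure 1 onto figure 2:
import tools
tools.cplim(1.3, 2)   # subplot 3 of figure 1  ->  figure 2
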
tools.dcf(t, x, y, zerolag=True, nbin=11, binwidth=None, bins=None, prebin=None)[source]

Compute the Discrete Correlation Function for unevenly sampled data.

If your data are evenly sampled, just use numpy.correlate()!

Inputs :

t – (1D sequence) - time sampling of input data.

x, y – (1D sequences) - input data.

Note that t, x, y should all have the same lengths!

Optional INPUT:

zerolag – (bool) - whether to compute DCF for zero-lag datapoints.

nbin – (int) - number of computed correlation/lag values to average over.

binwidth – (float) - width of bins to average over, in units of t. If set, this overrides nbin.

bins – (sequence) - edges of bins to use in averaging DCF values. If set, this overrides both nbin and binwidth.

prebin – (scalar) - factor by which to bin down initial data and lag pairs. This translates into a speed boost of about this same factor.

Returns :

meanlags – average lag in each averaged bin

rdcf – DCF value in each averaged bin

rdcf_err – uncertainty in the DCF value for each averaged bin

See ALSO:

numpy.correlate()

Requirements :

Numerical analysis routines, numpy
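
Example (a hypothetical sketch with unevenly sampled data; the signal and lag are arbitrary):
import numpy as np
import tools
t = np.sort(np.random.uniform(0., 100., 300))     # uneven time sampling
x = np.sin(2 * np.pi * t / 20.)
y = np.sin(2 * np.pi * (t - 3.) / 20.)            # same signal, lagged by 3 time units
meanlags, rdcf, rdcf_err = tools.dcf(t, x, y, nbin=21)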

tools.dict2obj(dic)[source]

Take an input Dict, and turn it into an object with fields corresponding to the dict’s keys.
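
Example (a minimal sketch; the dict contents are placeholders):
import tools
planet = tools.dict2obj(dict(name='HD 189733b', period=2.2186))
print planet.period    # the dict key 'period' becomes an attribute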

tools.drawCircle(x, y, radius, **kw)[source]

Draw a circular patch on the current, or specified, axes.

Input :

x, y – center of circle

radius – radius of circle

Optional INPUT:

ax – Axis to draw upon. if None, defaults to current axes.

**kw – options passable to matplotlib.patches.Circle()

Note :

Axes will NOT auto-rescale after this is called.
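
Example (a sketch; the keyword choices are illustrative and pass through to matplotlib.patches.Circle). Note the manual rescaling, per the note above:
import pylab as py
import tools
py.figure()
tools.drawCircle(0., 0., 1., fc='none', ec='r')
py.xlim(-2, 2); py.ylim(-2, 2)    # rescale by hand; the axes will not auto-rescale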

tools.drawEllipse(x, y, width, height, **kw)[source]

Draw an elliptical patch on the current, or specified, axes.

Input :

x, y – center of ellipse

width – width of ellipse

height – height of ellipse

Optional INPUT:

ax – Axis to draw upon. if None, defaults to current axes.

**kw – options passable to matplotlib.patches.Ellipse()

(angle, linewidth, fill, ...)

Note :

Axes will NOT auto-rescale after this is called.

See_also :

drawCircle(), drawRectangle()

tools.drawPolygon(xy, **kw)[source]

Draw a polygon patch on the current, or specified, axes.

Input :

xy – numpy array of coordinates, with shape Nx2.

Optional INPUT:

ax – Axis to draw upon. if None, defaults to current axes.

**kw – options passable to matplotlib.patches.Polygon()

See ALSO:

drawRectangle()

Note :

Axes will NOT auto-rescale after this is called.

tools.drawRectangle(x, y, width, height, **kw)[source]

Draw a rectangle patch on the current, or specified, axes.

Input :

x, y – lower-left corner of rectangle

width, height – dimensions of rectangle

Optional INPUT:

ax – Axis to draw upon. if None, defaults to current axes.

**kw – options passable to matplotlib.patches.Rectangle()

Note :

Axes will NOT auto-rescale after this is called.

tools.ee_psf(psf, energy=0.8, center=None)[source]

Determine the diameter in pixels enclosing a given encircled energy in a PSF.

Inputs :

psf – array representing a point spread function

energy – encircled energy fraction (default = 0.8). Can also be a vector of values, in which case the outputs ‘d’ and ‘ee’ are also vectors.

Optional INPUT:

center – [x y] coordinates of the center of the psf. If not passed, defaults to the coordinates of the maximum-valued point of the psf array.

Outputs :

d – diameter of the circle enclosing ‘energy’ [pix], at each desired encircled energy value

ee – encircled energy at the computed ‘d’, at each desired encircled energy value

tools.erf_approx(z, N)[source]

Weideman 1994’s approximate complex error function.

Inputs :

z : real or complex float

N : number of terms to use.

Notes :

returns w(z) = exp(-z**2) erfc(-1j*z)
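
Example (a cross-check sketch against SciPy's Faddeeva function, assuming scipy is available; z and N=16 are arbitrary choices):
import scipy.special
import tools
z = 0.5 + 1.2j
print tools.erf_approx(z, 16)       # Weideman-series approximation of w(z)
print scipy.special.wofz(z)         # reference value of w(z)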

tools.errxy(x, y, xbins, xmode='mean', ymode='mean', xerr='minmax', yerr='sdom', clean=None, binfactor=None, verbose=False, returnstats=False, timing=False)[source]

Bin down datasets in X and Y for errorbar plotting

Inputs :

x – (array) independent variable data

y – (array) dependent variable data

xbins – (array) edges of bins, in x-space. Only x-data between two bin edges will be used. Thus if M bin edges are entered, (M-1) datapoints will be returned. If xbins==None, then no binning is done.

Optional INPUT:
xmode/ymode – (str) method to aggregate x/y data into datapoints:

‘mean’ – use numpy.mean

‘median’ – use numpy.median

‘sum’ – use numpy.sum

None – don’t compute; return the empty list []

xerr/yerr – (str) method to aggregate x/y data into errorbars:

‘std’ – sample standard deviation (numpy.std)

‘sdom’ – standard deviation on the mean; i.e., std/sqrt(N)

‘minmax’ – use full range of data in the bin

None – don’t compute; return the empty list []

binfactor – (int) If not None, average over this many consecutive values instead of binning explicitly by time-based bins. Can also be a sequence, telling the number of values over which to average. E.g., binfactor=[10,10,20] will bin over the first 10 points, the second 10 points, and the next 20 points.

clean – (dict) keyword options to clean y-data ONLY, via analysis.removeoutliers, with an additional “nsigma” keyword. See removeoutliers for more information. E.g.: clean=dict(nsigma=5, remove=’both’, niter=1)

Outputs :

a tuple of four arrays to be passed to matplotlib.pyplot.errorbar:

xx – locations of the aggregated x-datapoint in each bin

yy – locations of the aggregated y-datapoint in each bin

xerr – x-errorbars

yerr – y-errorbars

Example :
x = hstack((arange(10), arange(20)+40))
y = randn(len(x))
xbins = [-1,15,70]
xx,yy,xerr,yerr = errxy(x,y,xbins)
plot(x,y, '.b')
errorbar(xx,yy,xerr=xerr,yerr=yerr, fmt='or')
Notes :
To just bin down uncleaned data (i.e., no ‘error’ terms returned), set clean, xerr, and yerr to None. However, when computing all values (xerr and yerr not None) it is faster to set clean to some ridiculous value, i.e., clean=dict(niter=0, nsigma=9e99). This probably means more optimization could be done.

Be sure you call the errorbar function using the keywords xerr and yerr, since otherwise the default order of inputs to that function is (x, y, yerr, xerr).

Data ‘x’ are determined to be in a bin with sides (L, R) when satisfying the condition (x>L) and (x<=R).

See ALSO:

matplotlib.pyplot.errorbar, analysis.removeoutliers()

Requirements :

numpy, Numerical analysis routines

tools.extinct_cardelli(lam_um, RV=3.1, warn=True)[source]

Compute the Cardelli et al. 1989 A_lam/A_V extinction.

Inputs :
lam_um : float or Numpy array

wavelength desired, in microns

RV : float

R_V extinction parameter

warn : bool

If True, print a warning if wavelength is outside the valid range.

Outputs :
extinct : float or Numpy array

extinction – A_lambda/A_V

Notes :

This is only valid for wavelengths in the range 0.3 - 3.3 microns!
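
Example (a sketch; the wavelengths are illustrative and lie within the valid 0.3 - 3.3 micron range):
import numpy as np
import tools
lam = np.array([0.55, 1.25, 2.2])            # roughly V, J, K bands, in microns
print tools.extinct_cardelli(lam, RV=3.1)    # A_lambda / A_V at each wavelength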

tools.extractSubregion(fitsfilename, corners=None, dx=None, dy=None, kw='subreg', retall=False)[source]

Extract a specified rectangular subregion from a FITS file.

Inputs :
fitsfilename : str

Name of the (2D) FITS file.

corners : str, 4-sequence

if sequence: [x0, x1, y0, y1], corners of subregion.

if str: header keyword containing this sequence.

dx : None, 2-sequence

If sequence: [x0, x1] will become [x0-dx[0], x1+dx[1]]

dy : None, 2-sequence

If sequence: [y0, y1] will become [y0-dy[0], y1+dy[1]]

kw : None, str

If str: this header keyword will be updated with the new corners (possibly modified by dx, dy)

Outputs :

(subregion_data, [fitsfile_header, corners_used])

If the specified header keyword is not found, or the specified corners return an error, then this function will crash inelegantly.

Note :

WCS headers will not be updated, so be careful when using this routine for imaging data!
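
Example (a hypothetical sketch; the filename and corner coordinates are placeholders):
import tools
subreg = tools.extractSubregion('image.fits', corners=[100, 400, 100, 400])   # [x0, x1, y0, y1]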

tools.fconvolve(a, v, oversamp=2)[source]

Returns the discrete, linear convolution of 1-D sequences a and v, using Fast Fourier Transforms. Restrictions are: a and v must both be real-valued, and len(a)>len(v).

Requirements :Numerical analysis routines, numpy
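
Example (a sketch convolving a long noise vector with a short smoothing kernel; note the len(a) > len(v) restriction):
import numpy as np
import tools
a = np.random.randn(1000)            # long, real-valued sequence
v = np.ones(15) / 15.                # short boxcar kernel
smoothed = tools.fconvolve(a, v)
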
tools.findRectangles(a, minsepy=None, minsepx=None, edgepad=10)[source]

Find corner coordinates of approximate rectangle shapes in an array.

Inputs :

a : 2D Numpy array

minsepy, minsepx : scalars

Minimum separation from one upper or left-hand border to the next. (cf. find_peaks())

edgepad : int

Pad the array with this many zeros on all sides (to find rectangles abutting array edges)

tools.find_peaks(vec, sep=0, thresh=None)[source]

Find all large values in the input vector that are separated by at least ‘sep’ pixels.

Inputs :

vec (sequence) – 1D vector

sep (scalar) – minimum separation of returned peaks

thresh (scalar) – ignore all peaks lower than this value.

Example :

import pylab as py
import tools
x = py.linspace(0, 10, 100)                         # Generate fake time index
y = py.sin(6.28*x/10.) + py.sin(6.28*x/2.)          # Generate fake data
peakvals, peaklocs = tools.find_peaks(y, sep=10)    # Find the peaks
py.plot(x, y, '-', x[peaklocs], peakvals, 'or')     # Plot them

Returns :

peakvals, peakindices

tools.flatten(x, maxdepth=100)[source]

flatten(sequence) -> list

Returns a single, flat list which contains all elements retrieved from the sequence and all recursively contained sub-sequences (iterables).

Optional INPUTS:
 
maxdepth – scalar

number of layers deep to dig. Setting this to zero causes no flattening to occur.

Examples :
>>> [1, 2, [3,4], (5,6)]
[1, 2, [3, 4], (5, 6)]
>>> flatten([[[1,2,3], (42,None)], [4,5], [6], 7, MyVector(8,9,10)])
[1, 2, 3, 42, None, 4, 5, 6, 7, 8, 9, 10]
tools.getfigs()[source]

Return a list of all open matplotlib figures.

No inputs or options.

tools.getfilelist(path='.', includes=[], excludes=[])[source]

Return a list of filenames meeting certain criteria.

Inputs :

path – (str) path to directory to be scanned

includes – (list of strs) – all strings in this list must be present in a filename to be returned

excludes – (list of strs) – any string in this list will prevent a filename from being returned
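
Example (a sketch; the path and substrings are placeholders):
import tools
files = tools.getfilelist('/data/raw', includes=['.fits', 'flat'], excludes=['bad'])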

tools.getparams(params, chisq=None, conf=[0.683])[source]

Find confidence levels and optimal parameters.

If chisq is None:
Take the median of each parameter, and compute the upper and lower confidence limits using the parameter distributions and this model.
Else:
Take a set of fit parameters and associated chi-squared values. Take as the optimum model that model with the MEDIAN chi-squared value. Compute upper and lower confidence limits using the parameter distributions and this optimum model.
Inputs :
params : N-D numpy array

model parameters

chisq : 1-D numpy array

chi-squared values.

conf : sequence or float

confidence levels to report

Outputs :
See ALSO:

analysis.dumbconf()

Requirements :

numpy

tools.hist2d(x, y, bins=None)[source]

Compute 2-d histogram data for specified bins.

Input :

x

y

Optional INPUT:
bins: a two-tuple containing one of the following:

(nx,ny) – tuple, number of bins in each direction

(xlims, ylims) – tuple of sequences (x- and y-bin edges)

Output :
A 3-tuple consisting of:

xbins – the centers of the x bins

ybins – the centers of the y bins

hist – The 2D histogram array

See ALSO:

numpy.histogram2d()

Requirements :

numpy
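
Example (a sketch using random data; the bin counts are arbitrary):
import numpy as np
import tools
x = np.random.randn(1000)
y = np.random.randn(1000)
xbins, ybins, hist = tools.hist2d(x, y, bins=(20, 20))   # 20 bins in each direction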

tools.hparams(params, nbins=10, figsize=[15, 10], normed=False, newfig=True, labs=None, cumulative=False, minorticks=True, plotconf=None, plotmid=False)[source]

Take a set of parameters and histogram them. Assume that the larger of the array’s two dimensions is N (the number of instantiations) and the smaller is M (the number of parameters).

Options :

nbins sets the number of bins

normed sets the histogram normalization

if newfig is False, plot into the current figure.

labs is a list of string labels

cumulative plots normalized cumulative distribution function

minorticks displays minor tick marks

plotconf displays confidence levels at the desired (fractional) threshold. E.g., plotconf=0.683 displays the one-sigma confidence limits.

plotmid plots nothing (if False), the median (if True), or the specified values (if a sequence). Defaults to the median if plotconf is True.

Requirements :

numpy, pylab

tools.imrot(*varargin)[source]

Rotate a SQUARE image by theta (in radians)

SYNTAX:res = imrot (img, theta)
Notes :

This is VERY SLOW and only uses nearest-neighbor interpolation.

Image must be square; size remains the same

tools.keylist(filelist, keys)[source]

Create an object based on FITS header keys extracted from a filelist.

Inputs :

filelist – sequence of strings representing filenames (for PyFITS)

keys – sequence of strings representing header keys

Keys not found in a file will result in a placeholder string value.

Requirements :pyfits, Spitzer/MIPS 24 micron Analysis Routines
tools.legc(leg, col='color')[source]

Color legend text to match linecolor.

Inputs :

‘leg’ is a legend object.

‘col’ sets the field of the leg.get_lines() objects to use to find the color.

You may need to refresh the figure to see the changes.
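
Example (a brief sketch; the plotted data are arbitrary):
import pylab as py
import tools
py.plot([0, 1], 'b', label='data')
py.plot([1, 0], 'r', label='model')
leg = py.legend()
tools.legc(leg)        # legend text now matches the line colors
py.draw()              # refresh the figure to see the change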

tools.loadpickle(filename)[source]

Load a pickle from a given filename. If it can’t be loaded by pickle, return -1 – otherwise return the pickled object.

E.g.,
data = tools.loadpickle(filename)
tools.multifunc(params, func1, func2, npar1, npar2=None, args1=(), args2=())[source]

Multiply two functions together.

Example :
import numpy as np
multifunc([.785, .785], np.cos, np.sin, 1)  # returns cos(pi/4) * sin(pi/4) ~ 0.5
Example :
import numpy as np
import pylab as py
import tools
x = 2*py.pi * py.linspace(-5, 5, 500)
y = tools.multifunc([x/8., x], np.sin, np.sin, 1).ravel()
py.plot(x, y)
tools.multifunc_general(params, funcs, npar=None, args=None)[source]

Multiply results of several functions together.

Inputs :
params : sequence

Concatenated sequence of parameters (first arguments) for each of the functions in ‘funcs’.

funcs : function or tuple

Single function, or a tuple of several functions to call.

npar : tuple

If more than one function is used, npar must be a tuple specifying the number of parameters passed to each function (as its first input). E.g., if npar = (2, 3) then the result will be: funcs[0](params[0:2]) * funcs[1](params[2:5])

args : tuple

If more than one function is used, args must be a tuple specifying the additional arguments to be passed to each function. E.g.: funcs[0](params[0:2], *args[0])

Example :
import numpy as np
multifunc_general([.785, .785], (np.cos, np.sin), (1, 1))  # returns cos(pi/4) * sin(pi/4) ~ 0.5
Example :
import numpy as np
import pylab as py
import tools
x = 2*py.pi * py.linspace(-10, 10, 500)
y = tools.multifunc_general([x/8., x], (np.sin, np.sin), (1, 1)).ravel()
py.plot(x, y)
tools.nextfig()[source]

Return one greater than the largest-numbered figure currently open. If no figures are open, return unity.

No inputs or options.

tools.pcmodelxcorr(pcaout, data, model, npcs=6, nblock=1000, xl=50, modstr='model', titstr='')[source]

Plot cross-correlations between projection of principal components onto data and a model.

Inputs :
pcaout – output from pcsa.pca, but its only important component is that pcaout[2] is an array of PC eigenvectors; the strongest vector would be pcaout[2][:,-1], etc.

data – numpy array. Should be shape N x M – N observations of M variables, arranged in L blocks, and should have been mean-subtracted prior to PCA (i.e., data -= data.mean(0))

model – numpy array. Should be shape M.

npcs – int. Number of principal components to cross-correlate with.

nblock – int. Number of channels to use at a time in the correlations. Must be an integral divisor of data.shape[1]

xl – int. +/- X-limits to display in plots.

Requirements :

pylab, numpy, Phase curve spectroscopic analysis, and associated subroutines.

Usage is pretty specific to echelle-data

tools.planet2temp(rprs, fpfs, teff, lam=24.0, gamma=0.8, ntrials=10000)[source]

Convert planet/star size and flux contrasts to brightness temperatures.

Inputs :
rprs : 2-tuple

Either (value, dist), where dist is the sampled posterior distribution of the planet/star radius ratio (e.g., from MCMC output), or (value, uncertainty).

fpfs : 2-tuple

Either (value, dist), where dist is the sampled posterior distribution of the planet/star flux ratio (e.g., from MCMC output), or (value, uncertainty).

teff : 2-tuple

Either (value, dist), where dist is the sampled posterior distribution of the stellar effective temperature (e.g., from MCMC output), or (value, uncertainty).

lam : float

wavelength of observations, in microns

gamma : float

factor to account for the fact that a star’s infrared flux is lower than predicted by a blackbody of a given effective temperature. Set to 0.8 for 24 microns.

ntrials : int

Number of times to resample distributions.

REQUIREMENTS:scipy.optimize, numpy
tools.plotc(x, y, z, **kw)[source]

Plot x,y data with an evolving z component by changing its color using a matplotlib colormap ‘cm’ object.

Will bomb if z elements are non-finite.

Options :
map : str

colormap to use for z

zmin, zmax : floats

maximum/minimum values of z for colorscale

others : various

Any options passable to matplotlib’s plot()

See ALSO:

matplotlib.colormaps()

Requirements :

pylab
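
Example (a sketch; the colormap name and data are illustrative):
import numpy as np
import tools
t = np.linspace(0, 10, 200)
x, y, z = np.cos(t), np.sin(t), t
tools.plotc(x, y, z, map='jet', zmin=0, zmax=10)   # color encodes the z value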

tools.plotcorrs(params, labs=None, tit=None, xrot=0, yrot=0, cmap=None, figsize=None, plotregion=[0.1, 0.1, 0.8, 0.8], n=6, nbins=None, clim=None)[source]

Plot correlation coefficient matrix in one big, huge, honkin’ figure. Color-code the figure based on the correlation coefficients between parameters.

Inputs :

params – (M x N array) M instantiations of a set of N parameters.

Options :

labs – (list) labels for each of the N parameters

tit – (str) title for figure

xrot/yrot – (float) axis label rotation, in degrees

cmap – (matplotlib cm) – colormap for color-coding.

figsize – (2-list) – width and height of figure

plotregion – (4-list) – (left, bottom, width, height) of plotted region in each figure

n – (int) – number of subplots across each figure

nbins : int

Bin the data into this many bins, and show 2D histograms instead of points.

clim : None

Colorscale limits for normalized 2D histograms (where hist.sum() = 1.0)

Requirements :

pylab, NIRSPEC Data Analysis

Notes :

Based on the concept by Nymyer and Harrington at U. Central Florida

Beware of plotting too many points and clogging your system!

tools.ploth(*args, **kw)[source]

Plot 1D data in a histogram-like format. If x-coordinates are specified, they refer to the centers of the histogram bars.

Uses same format as matplotlib.pyplot.plot. For example:
ploth(x, y)         # plot x and y using solid linestyle (default)
ploth(x, y, 'bo')   # plot x and y using blue circle markers w/no line
ploth(y)            # plot y using x as index array 0..N-1
ploth(y, 'r*--')    # ditto, but with red star corners and dashed line
Options :
rot90 : bool

If True, data will be plotted histogram-style vertically, rather than the standard horizontal plotting.

Requirements :

numpy, Numerical analysis routines

tools.plotlikelihood_2d(L, x=None, y=None, conf=[0.683], figsize=[8, 6], contourargs={'colors': 'k'}, posteriorargs={'color': 'k'}, limitargs={'color': 'k', 'linestyle': ':'}, xlabel=None, ylabel=None, buffers=[0.1, 0.1, 0.1, 0.1])[source]

Plot contours and histograms for 2D likelihood array.

Inputs :

L : 2d Numpy array

Likelihood values, not necessarily normalized. (Remember that L = exp[-chi^2 / 2.] )

x : sequence

Values along the first dimension of L. Thus len(x) must equal L.shape[0]

y : sequence

Values along the second dimension of L. Thus len(y) must equal L.shape[1]

figsize : 2-sequence

Size of figure to be created, in inches.

conf : scalar or sequence

Confidence intervals to plot

contourargs : dict

Keyword arguments to be passed to matplotlib.contour

posteriorargs : dict

Keyword arguments to be passed to matplotlib.plot for posterior distributions

limitargs : dict

Keyword arguments to be passed to matplotlib.plot for 1D confidence limits

xlabel : str

ylabel : str

buffers : 4-sequence

fractional buffer width around edges: [left, right, bottom, top]

tools.plotstyle(i, c=['b', 'g', 'r', 'c', 'm', 'y', 'k'], s=['.', 'x', 's', '^', '*', 'o', '+', 'v', 'p', 'D'], l=['-', '--', '-.', ':'])[source]

Return plot properties to help distinguish many types of plot symbols.

Input :

i – int.

Optional INPUT:

c – color, or list of colors accepted by pylab.plot

s – symbol, or list of symbols accepted by pylab.plot

l – linestyle, or list of linestyles accepted by pylab.plot

Output :

tuple of (color, symbol, linestyle)

Requirements :

numpy
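
Example (a sketch cycling over styles when over-plotting several datasets; the data are arbitrary):
import pylab as py
import tools
for i in range(5):
    c, s, l = tools.plotstyle(i)
    py.plot(py.randn(10), color=c, marker=s, linestyle=l)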

tools.popall(seq, obj)[source]

Remove all instances of ‘obj’ from list ‘seq’

Input :

seq – (list) list from which to pop elements

obj – target object to remove

Example :
import tools
b = [3, 'spam', range(5)]
tools.popall(b, 4)
print b
Notes :

– Will fail if ‘obj’ is itself a list.

– Edits list in-place, so make a copy first if you want to retain the old version of your list.

– Has not been tested for extremely deep lists

See ALSO:

replaceall()

tools.pparams(params, npts=None, figsize=[15, 10], newfig=True, labs=None)[source]

Take a set of parameters and plot them. Assume that the larger of the array’s two dimensions is N (the number of instantiations) and the smaller is M (the number of parameters).

If npts is not None, then pick only every (N/npts)th point. if newfig is False, plot into the current figure.

Requirements :numpy, pylab
tools.printfigs(filename, figs=None, format=None, pdfmode='texexec', verbose=False)[source]

Print desired figures using designated ‘format’.

Inputs :

filename – string. Prepended to all open figures.

figs – int or list. Figures to access, then apply savefig to. If None, print all open figures; if -1, print only the current figure.

format – string or list of strings. If ‘pdf’, all images are concatenated into one file (use “pdfs” for individual pdf figure files).

pdfmode – string. Method of concatenating PDFs: either ‘texexec’ or ‘gs’ (for GhostScript).

Notes :

If no explicit path is passed and a subdirectory ‘figures’ exists in the current directory, the figures will be printed in ‘figures’ instead.

Example :
from pylab import *
figure(1); plot(arange(10), randn(10), 'ob')
figure(2); plot(arange(15), randn(15), '-xr')
printfigs('testing')
!open testing.pdf
tools.readPSplotdata(filename, spacedelimiter=' ', pairdelimiter=', ')[source]

Read in a raw PostScript plot datafile and output Numpy arrays.

Inputs :
filename : str

filename of the data file.

spacedelimiter : str

delimiter between data pairs; by default a space (” ”)

pairdelimiter : str

delimiter within data pairs; by default a comma (”,”)

File_format :
The datafile should have the format:

“m x0,y0 dx1,dy1 dx2,dy2 ...”

Outputs :

(x, y) – tuple of 1D arrays

tools.replaceall(seq, obj, rep)[source]

Replace all instances of ‘obj’ with ‘rep’ in list ‘seq’

Input :

seq – (list) list within which to find-and-replace elements

obj – target object to replace

rep – replacement object

Example :
import tools
b = [2, ['spam', ['eggs', 5, dict(spam=3)]]]
tools.replaceall(b, 'spam', 'bacon')
print b
Notes :

– Will fail if ‘obj’ is itself a list.

– Edits list in-place, so make a copy first if you want to retain the old version of your list.

– Has not been tested for extremely deep lists

See ALSO:

popall()

tools.resamp(frame, resamp, retcoords=False)[source]

Resample a 2D array by a given factor, using bilinear interpolation.

Inputs :
frame : 2D Numpy Array

data array to be resampled

resamp : positive scalar

resampling factor (typically an integer greater than 1)

retcoords: bool

If True, return tuple (frame2, x2, y2)

Notes :

Result needs to be scaled by (1/resamp^2) to conserve flux.

tools.roundparams(value, error, seconderror=None, extraplaces=1)[source]

Round a value and its associated error(s) to 2 decimal places.

Inputs :

value, error : scalars

Outputs :

value, error

Example :
import tools
tools.roundparams(1.23456, .09876)
tools.sample_1dcdf(pdf, x, nsamp=1, verbose=False, absnorm=False)[source]

Sample a 1D Posterior Distribution Function (1d-histogram)

Inputs :
PDF : 1D NumPy array

Distribution function (histogram) from which you wish to draw samples. This need not be in any particular normalized form – the only condition is that the value in each cell is proportional to the probability of that cell.

x : 1D NumPy array

N values indexing the cells of the PDF.

nsamp : int

Number of samples to be drawn.

absnorm : bool

If True, normalize pdf so that it integrates to 1.0
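
Example (a sketch drawing samples from a Gaussian-shaped histogram; the distribution is arbitrary):
import numpy as np
import tools
x = np.linspace(-5, 5, 200)
pdf = np.exp(-0.5 * x**2)                          # unnormalized Gaussian 'histogram'
samples = tools.sample_1dcdf(pdf, x, nsamp=1000)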

tools.sample_2dcdf(cdf, x, y, nsamp=1, verbose=False)[source]

Sample a 2D Cumulative Distribution Function (2d-histogram)

Inputs :
CDF : 2D NumPy array

Two-dimensional (N x M) cumulative distribution function (histogram) from which you wish to draw samples. This need not be in any particular normalized form – the only condition is that the value in each cell is proportional to the probability of that cell.

x : 1D NumPy array

N Values indexing the cells of the CDF (one per row)

y : 1D NumPy array

M Values indexing the cells of the CDF (one per column)

nsamp : int

Number of samples to be drawn.

Notes :

Note that this approach uses simple, dumb nearest-neighbor interpolation when drawing candidate samples. Exceedingly granular CDFs may suffer.

Note that this code isn’t optimized; try it with small samples before you use it on large ones!

tools.savepickle(obj, filename)[source]

Save a pickle to a given filename. If it can’t be saved by pickle, return -1 – otherwise return the file object.

To save multiple objects in one file, use (e.g.) a dict:

tools.savepickle(dict(a=[1,2], b='eggs'), filename)
tools.scrapeints(string)[source]

Extract a series of integers from a string. Slow but robust.

scrapeints('[124, 56|abcdsfad4589.2]')

will return:

[124, 56, 4589, 2]

tools.shift_image(im, dx=0, dy=0, buffer=0)[source]

Shift a 2D image by the specified number of integer pixels.

Inputs :
im : 2D Numpy array

numpy array of desired size

dx : int

number of pixels to shift in x direction

dy : int

number of pixels to shift in y direction

buffer : scalar

Default value for unfilled pixels
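
Example (a minimal sketch; the array and shifts are placeholders):
import numpy as np
import tools
im = np.arange(25).reshape(5, 5)
shifted = tools.shift_image(im, dx=2, dy=-1, buffer=0)   # vacated pixels are set to 'buffer'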

tools.sumfunc(params, func1, func2, npar1, npar2=None, args1=(), args2=())[source]

Add two functions together.

Example :
import numpy as np
sumfunc([.785, .785], np.cos, np.sin, 1)  # returns cos(pi/4) + sin(pi/4) ~ 1.414
Example :
import numpy as np
import pylab as py
import tools
x = 2*py.pi * py.linspace(-5, 5, 500)
y = tools.sumfunc([x/8., x], np.sin, np.sin, 1).ravel()
py.plot(x, y)
