pisa.analysis package
Submodules
pisa.analysis.analysis module
Common tools for performing an analysis, collected into a single class Analysis that can be subclassed by specific analyses.
- class pisa.analysis.analysis.Analysis[source]
Bases:
BasicAnalysis
Analysis class for “canonical” IceCube/DeepCore/PINGU analyses.
- “Data” distribution creation (via passed data_maker object)
- “Expected” distribution creation (via passed distribution_maker object)
- Minimizer Interface (via method _minimizer_callable)
An interface to a minimizer is provided for modifying the free parameters of the distribution_maker to fit its output (as closely as possible) to the data distribution. See [minimizer_settings] for details.
- fit_hypo(data_dist, hypo_maker, metric, minimizer_settings, hypo_param_selections=None, reset_free=True, check_octant=True, fit_octants_separately=None, check_ordering=False, other_metrics=None, blind=False, pprint=True, external_priors_penalty=None)[source]
Fitter “outer” loop: If check_octant is True, run fit_hypo_inner starting in each octant of theta23 (assuming that is a param in the hypo_maker). Otherwise, just run the inner method once.
Note that prior to running the fit, the hypo_maker has hypo_param_selections applied and its free parameters are reset to their nominal values.
- Parameters:
data_dist (MapSet or List of MapSets) – Data distribution(s). These are what the hypothesis is tasked to best describe during the optimization process.
hypo_maker (Detectors, DistributionMaker or instantiable thereto) – Generates the expectation distribution under a particular hypothesis. This typically has (but is not required to have) some free parameters which can be modified by the minimizer to optimize the metric.
hypo_param_selections (None, string, or sequence of strings) – A pipeline configuration can have param selectors that allow switching a parameter among two or more values by specifying the corresponding param selector(s) here. This also allows for a single instance of a DistributionMaker to generate distributions from different hypotheses.
metric (string or iterable of strings) – The metric to use for optimization. Valid metrics are found in VALID_METRICS. Note that the optimized hypothesis also has this metric evaluated and reported for each of its output maps.
minimizer_settings (string or dict)
check_octant (bool) – If theta23 is a parameter to be used in the optimization (i.e., free), the fit will be re-run in the second (first) octant if theta23 is initialized in the first (second) octant.
reset_free (bool) – Resets all free parameters to values defined in stages when starting a fit
fit_octants_separately (bool) – If ‘check_octant’ is set so that the two octants of theta23 are individually checked, this flag enforces that each theta23 can only vary within the octant currently being checked (e.g. the minimizer cannot swap octants). Deprecated.
check_ordering (bool) – If the ordering is not in the hypotheses already being tested, the fit will be run in both orderings.
other_metrics (None, string, or list of strings) – After finding the best fit, these other metrics will be evaluated for each output that contributes to the overall fit. All strings must be valid metrics, as per VALID_METRICS, or the special string ‘all’ can be specified to evaluate all VALID_METRICS.
pprint (bool) – Whether to show live-update of minimizer progress.
blind (bool or int) – Whether to carry out a blind analysis. If True or 1, this hides actual parameter values from display and disallows these (as well as Jacobian, Hessian, etc.) from ending up in logfiles. If given an integer > 1, the fitted parameters are also prevented from being stored in fit info dictionaries.
external_priors_penalty (func) – User defined prior penalty function. Adds an extra penalty to the metric that is minimized, depending on the input function.
- Returns:
best_fit_info (HypoFitResult) – the best-fit information; see the fit_hypo_inner method for details of the fit_info dict
alternate_fits (list) – fit_info dicts from the other fits run
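For orientation, a minimal sketch of a typical call (the pipeline config and minimizer-settings paths are illustrative placeholders, not guaranteed to ship with PISA):

from pisa.analysis.analysis import Analysis
from pisa.core.distribution_maker import DistributionMaker
from pisa.utils.fileio import from_file

# build the hypothesis maker and an Asimov "data" distribution from it
hypo_maker = DistributionMaker(["settings/pipeline/example.cfg"])
data_dist = hypo_maker.get_outputs(return_sum=True)

# hypothetical minimizer-settings file; any scipy minimizer config works
minimizer_settings = from_file("settings/minimizer/slsqp_settings.json")

ana = Analysis()
best_fit_info, alternate_fits = ana.fit_hypo(
    data_dist,
    hypo_maker,
    metric="chi2",
    minimizer_settings=minimizer_settings,
    check_octant=True,
)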
- nofit_hypo(data_dist, hypo_maker, hypo_param_selections, hypo_asimov_dist, metric, other_metrics=None, blind=False, external_priors_penalty=None)[source]
Fitting a hypo to a distribution generated by its own distribution maker is unnecessary. In such a case, use this method (instead of fit_hypo) to still retrieve meaningful information, e.g. the match metrics.
- Parameters:
data_dist (MapSet or List of MapSets)
hypo_maker (Detectors or DistributionMaker)
hypo_param_selections (None, string, or sequence of strings)
hypo_asimov_dist (MapSet or List of MapSets)
metric (string or iterable of strings)
other_metrics (None, string, or sequence of strings)
blind (bool)
external_priors_penalty (func)
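Continuing the sketch above, nofit_hypo evaluates the metric(s) between the data distribution and the hypothesis’ own Asimov distribution without invoking a minimizer (names reused from the earlier sketch):

# the expectation is produced by the same maker that defines the hypothesis
hypo_asimov_dist = hypo_maker.get_outputs(return_sum=True)

fit_info = ana.nofit_hypo(
    data_dist=data_dist,
    hypo_maker=hypo_maker,
    hypo_param_selections=None,
    hypo_asimov_dist=hypo_asimov_dist,
    metric="chi2",
)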
- scan(data_dist, hypo_maker, metric, hypo_param_selections=None, param_names=None, steps=None, values=None, only_points=None, outer=True, profile=True, minimizer_settings=None, outfile=None, debug_mode=1, **kwargs)[source]
Set hypo maker parameters named by param_names according to either values specified by values or number of steps specified by steps, and return the metric indicating how well the data distribution is described by each distribution.
Some flexibility in how the user can specify values is allowed, based upon the shapes of param_names and values and how the outer flag is set.
Either values or steps must be specified, but not both.
- Parameters:
data_dist (Sequence of MapSets or MapSet) – Data distribution(s). These are what the hypothesis is tasked to best describe during the optimization/comparison process.
hypo_maker (Detectors, DistributionMaker or instantiable thereto) – Generates the expectation distribution under a particular hypothesis. This typically has (but is not required to have) some free parameters which will be modified by the minimizer to optimize the metric in case profile is set to True.
hypo_param_selections (None, string, or sequence of strings) – A pipeline configuration can have param selectors that allow switching a parameter among two or more values by specifying the corresponding param selector(s) here. This also allows for a single instance of a DistributionMaker to generate distributions from different hypotheses.
metric (string or iterable of strings) – The metric to use for optimization/comparison. Note that the optimized hypothesis also has this metric evaluated and reported for each of its output maps. Confer pisa.core.map for valid metrics.
param_names (None, string, or sequence of strings) – If None, assume all parameters are to be scanned; otherwise, specifies only the name or names of parameters to be scanned.
steps (None, integer, or sequence of integers) –
Number of steps to take within the allowed range of the parameter (or parameters). Value(s) specified for steps must be >= 2. Note that the endpoints of the range are always included, and numbers of steps higher than 2 fill in between the endpoints.
- If integer…
Take this many steps for each specified parameter.
- If sequence of integers…
Take the corresponding number of steps within the allowed range for each specified parameter.
values (None, scalar, sequence of scalars, or sequence-of-sequences) –
- If scalar…
Set this value for the (one) param name in param_names.
- If sequence of scalars…
If len(param_names) is 1, set its value to each number in the sequence.
Otherwise, set each param in param_names to the corresponding value in values. There must be the same number of param names as values.
- If sequence of (sequences or iterables)…
Each param name corresponds to one of the inner sequences, in the order that the param names are specified.
If outer is False, all inner sequences must have the same length, and there will be one distribution generated for each set of values across the inner sequences. In other words, there will be a total of len(inner sequence) distributions generated.
If outer is True, the lengths of inner sequences needn’t be the same. This takes the outer product of the passed sequences to arrive at the permutations of the parameter values that will be used to produce the distributions (essentially nested loops over each parameter). E.g., if two params are scanned, for each value of the first param’s inner sequence, a distribution is produced for every value of the second param’s inner sequence. In total, there will be
len(inner seq0) * len(inner seq1) * ...
distributions produced.
only_points (None, integer, or even-length sequence of integers) – Only select subset of points to be analysed by specifying their range of positions within the whole set (0-indexed, incremental). For the lazy amongst us…
outer (bool) – If set to True and a sequence of sequences is passed for values, the points scanned are the outer product of the inner sequences. See values for a more detailed explanation.
profile (bool) – If set to True, minimizes specified metric over all free parameters at each scanned point. Otherwise keeps them at their nominal values and only performs grid scan of the parameters specified in param_names.
minimizer_settings (dict) – Dictionary containing the settings for minimization, which are only needed if profile is set to True. Hint: it has proven useful to sprinkle with a healthy dose of scepticism.
outfile (string) – Outfile to store results to. Will be updated at each scan step to write out intermediate results to prevent loss of data in case the apocalypse strikes after all.
debug_mode (int, either one of [0, 1, 2]) – If set to 2, will add a wealth of minimisation history and physics information to the output file. Otherwise, the output will contain the essentials to perform an analysis (0), or will hopefully be detailed enough for some simple debugging (1). Any other value for debug_mode will be set to 2.
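As an illustration, a hedged sketch of a one-dimensional profile scan over theta23, reusing the objects from the fit_hypo sketch above (step count and output file name are arbitrary):

scan_results = ana.scan(
    data_dist,
    hypo_maker,
    metric="chi2",
    param_names="theta23",
    steps=10,                 # endpoints included, points evenly spaced
    profile=True,             # minimize over all other free params per point
    minimizer_settings=minimizer_settings,
    outfile="theta23_scan.json",  # updated after each scan step
)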
- class pisa.analysis.analysis.BasicAnalysis[source]
Bases:
object
A bare-bones analysis that only fits a hypothesis to data.
Full analyses with functionality beyond just fitting (doing scans, for example) should sub-class this class.
Every fit is run with the fit_recursively method, where the fit strategy is defined by the three arguments method, method_kwargs and local_fit_kwargs (see the documentation of fit_recursively() below for the other arguments). The method argument determines which sub-routine should be run, method_kwargs is a dictionary with any keyword arguments of that sub-routine, and local_fit_kwargs is a dictionary (or list thereof) defining any nested sub-routines that are run within the outer sub-routine. A sub-sub-routine defined in local_fit_kwargs should again be a dictionary with the three keywords method, method_kwargs and local_fit_kwargs. In this way, sub-routines can be arbitrarily stacked to define complex fit strategies.
Examples
A canonical standard oscillation fit fits octants in theta23 separately and then runs a scipy minimizer to optimize locally in each octant. The arguments that would produce that result when passed to fit_recursively are:
method = "octants" method_kwargs = { "angle": "theta23" "inflection_point": 45 * ureg.deg } local_fit_kwargs = { "method": "scipy", "method_kwargs": minimizer_settings, "local_fit_kwargs": None }
Let’s say we also have a CP violating phase deltacp24 that we want to fit separately per quadrant split at 90 degrees. We want this done within each quadrant fit for theta23, making 4 fits in total. Then we would nest the quadrant fit for deltacp24 inside the octant fit like so:
method = "octants" method_kwargs = { "angle": "theta23" "inflection_point": 45 * ureg.deg } local_fit_kwargs = { "method": "octants", "method_kwargs": { "angle": "deltacp24", "inflection_point": 90 * ureg.deg, } "local_fit_kwargs": { "method": "scipy", "method_kwargs": minimizer_settings, "local_fit_kwargs": None } }
Let’s suppose we want to apply a grid-scan global fit method to sterile mixing parameters theta24 and deltam41, but we want to marginalize over all other parameters with a usual 3-flavor fit configuration. That could be achieved as follows:
method = "grid_scan" method_kwargs = { "grid": { "theta24": np.geomspace(1, 20, 3) * ureg.deg, "deltam41": np.geomspace(0.01, 0.5, 4) * ureg["eV^2"], }, "fix_grid_params": False, } local_fit_kwargs = { "method": "octants", "method_kwargs": { "angle": "theta23", "inflection_point": 45 * ureg.deg, } "local_fit_kwargs": { "method": "scipy", "method_kwargs": minimizer_settings, "local_fit_kwargs": None } }
Instead of scipy, we can also use iminuit and nlopt for local minimization or global searches by writing a dictionary with "method": "iminuit" or "method": "nlopt", respectively.
NLOPT Options
NLOPT can be dropped in place of scipy and iminuit by writing a dictionary with "method": "nlopt" and choosing the algorithm by its name of the form NLOPT_{G,L}{N}_XXXX. PISA supports all of the derivative-free global (https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/#global-optimization) and local (https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/#local-derivative-free-optimization) algorithms. Algorithms requiring gradients such as BFGS are not supported. To use the Nelder-Mead algorithm, for example, the following settings could be used:

nlopt_settings = {
    "method": "nlopt",
    "method_kwargs": {
        "algorithm": "NLOPT_LN_NELDERMEAD",
        "ftol_abs": 1e-5,
        "ftol_rel": 1e-5,
        # other options that can be set here:
        # xtol_abs, xtol_rel, stopval, maxeval, maxtime
        # after maxtime seconds, stop and return best result so far
        "maxtime": 60
    },
    "local_fit_kwargs": None  # no further nesting available
}
and then run the fit with

best_fit_info = ana.fit_recursively(
    data_dist, dm, "chi2", None, **nlopt_settings
)

Of course, you can also nest the nlopt_settings dictionary in any of the octants, ranges and so on by passing it as local_fit_kwargs.
Adding constraints
Adding inequality constraints to algorithms that support them is possible by writing a lambda function in a string that expects to get the current parameters as a ParamSet and returns a float. The fit keeps the value of the passed function negative (to be consistent with scipy). The string will be passed to eval to build the callable function. For example, a silly way to bound delta_index > 0.1 would be:
"method_kwargs": { "algorithm": "NLOPT_LN_COBYLA", "ftol_abs": 1e-5, "ftol_rel": 1e-5, "maxtime": 30, "ineq_constraints": [ # be sure to convert parameters to their magnitude "lambda params: params.delta_index.m - 0.1" ] }
Adding inequality constraints to algorithms that don’t support them can be done either by nesting the local fit in the constrained_fit method or by using NLOPT’s AUGLAG method, which adds a penalty for constraint violations internally. For example, we could fulfill the same constraint with the PRAXIS algorithm:
"method_kwargs": { "algorithm": "NLOPT_AUGLAG", "ineq_constraints":[ "lambda params: params.delta_index.m - 0.1" ], "local_optimizer": { # supports all the same options as above "algorithm": "NLOPT_LN_PRAXIS", "ftol_abs": 1e-5, "ftol_rel": 1e-5, } }
Using global searches with local subsidiary minimizers
Some global searches, like evolutionary strategies, use local subsidiary minimizers. These can be defined just as above by passing a dictionary with the settings to the local_optimizer keyword. Note that, again, only gradient-free methods are supported. Here is an example for the “Multi-Level single linkage” (MLSL) algorithm, using PRAXIS as the local optimizer:
"method_kwargs": { "algorithm": "NLOPT_G_MLSL_LDS", "local_optimizer": { "algorithm": "NLOPT_LN_PRAXIS", "ftol_abs": 1e-5, "ftol_rel": 1e-5, } }
For some evolutionary strategies such as ISRES, the population option can also be set.
"method_kwargs": { "algorithm": "NLOPT_GN_ISRES", "population": 100, }
Custom fitting methods
Custom fitting methods are added by subclassing the analysis. The fit function name has to follow the scheme _fit_{method} where method is the name of the fit method. For instance, the function for scipy is called _fit_scipy and can be called by setting "method": "scipy" in the fit strategy dict.
The function has to accept the parameters data_dist, hypo_maker, metric, external_priors_penalty, method_kwargs, and local_fit_kwargs. See docstring of fit_recursively for descriptions of these arguments. The return value of the function must be a HypoFitResult object. As an example, the following sub-class of the BasicAnalysis has a custom fit method that, nonsensically, always sets 42 degrees as the starting value for theta23:
class SubclassedAnalysis(BasicAnalysis):

    def _fit_nonsense(
        self, data_dist, hypo_maker, metric,
        external_priors_penalty, method_kwargs, local_fit_kwargs
    ):
        logging.info("Starting nonsense fit (setting theta23 to 42 deg)...")

        for pipeline in hypo_maker:
            if "theta23" in pipeline.params.free.names:
                pipeline.params.theta23.value = 42 * ureg["deg"]

        best_fit_info = self.fit_recursively(
            data_dist, hypo_maker, metric, external_priors_penalty,
            local_fit_kwargs["method"], local_fit_kwargs["method_kwargs"],
            local_fit_kwargs["local_fit_kwargs"]
        )
        return best_fit_info
Now, the nonsense fit method can be used and nested with any other fit method like so:
ana = SubclassedAnalysis()

local_minuit = OrderedDict(
    method="iminuit",
    method_kwargs={
        "tol": 10,
    },
    local_fit_kwargs=None
)

local_nonsense_minuit = OrderedDict(
    method="nonsense",
    method_kwargs=None,
    local_fit_kwargs=local_minuit
)

fit_result = ana.fit_recursively(
    data_dist,
    distribution_maker,
    "chi2",
    None,
    **local_nonsense_minuit
)
- MCMC_sampling(data_dist, hypo_maker, metric, nwalkers, burnin, nsteps, return_burn_in=False, random_state=None, sampling_algorithm=None)[source]
Performs MCMC sampling. Only supports serial (single CPU) execution at the moment. See issue #830.
- Parameters:
data_dist (Sequence of MapSets or MapSet) – Data distribution to be fit. Can be an actual-, Asimov-, or pseudo-data distribution (where the latter two are derived from simulation and so aren’t technically “data”).
hypo_maker (Detectors or DistributionMaker) – Creates the per-bin expectation values per map based on its param values. Free params in the hypo_maker are modified by the minimizer to achieve a “best” fit.
metric (string or iterable of strings) – Metric by which to evaluate the fit. See documentation of Map.
nwalkers (int) – Number of walkers
burnin (int) – Number of steps in the burn-in phase
nsteps (int) – Number of steps after burn-in
return_burn_in (bool) – Also return the steps of the burn-in phase. Default is False.
random_state (None or type accepted by utils.random_numbers.get_random_state) – Random state of the walker starting points. Default is None.
sampling_algorithm (None or emcee.moves object) – Sampling algorithm used by the emcee sampler. None means to use the default which is a Goodman & Weare “stretch move” with parallelization. See https://emcee.readthedocs.io/en/stable/user/moves/#moves-user to learn more about the emcee sampling algorithms.
- Returns:
scaled_chain (numpy array) – Array containing all points in the parameter space visited by each walker. It is sorted by steps, so all the first steps of all walkers come first. For example, to get all values of the Nth parameter for the ith walker, use scaled_chain[i::nwalkers, N].
scaled_chain_burnin (numpy array (optional)) – Same as scaled_chain, but for the burn-in phase.
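A short usage sketch (walker and step counts are arbitrary; indexing follows the convention described above):

nwalkers = 20
scaled_chain = ana.MCMC_sampling(
    data_dist,
    hypo_maker,
    metric="llh",
    nwalkers=nwalkers,
    burnin=200,
    nsteps=1000,
)
# all post-burn-in samples of the 0th free parameter from walker i=3
param0_walker3 = scaled_chain[3::nwalkers, 0]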
- fit_recursively(data_dist, hypo_maker, metric, external_priors_penalty, method, method_kwargs=None, local_fit_kwargs=None)[source]
Recursively apply global search strategies with local sub-fits.
- Parameters:
data_dist (Sequence of MapSets or MapSet) – Data distribution to be fit. Can be an actual-, Asimov-, or pseudo-data distribution (where the latter two are derived from simulation and so aren’t technically “data”).
hypo_maker (Detectors or DistributionMaker) – Creates the per-bin expectation values per map based on its param values. Free params in the hypo_maker are modified by the minimizer to achieve a “best” fit.
metric (string or iterable of strings) – Metric by which to evaluate the fit. See documentation of Map.
external_priors_penalty (func) – User-defined prior penalty function, which takes hypo_maker and metric as arguments and returns the numerical value of the penalty to be added to the metric. The sign of the penalty must be chosen correctly inside external_priors_penalty (e.g. negative for llh, positive for chi2); see the sketch after this list.
method (str) – Name of the sub-routine to be run. Currently, the options are scipy, octants, best_of, grid_scan, constrained, ranges, condition, iminuit, and nlopt.
method_kwargs (dict) – Any keyword arguments taken by the sub-routine. May be None if the sub-routine takes no additional arguments.
local_fit_kwargs (dict or list thereof) – A dictionary defining subsidiary sub-routines with the keywords method, method_kwargs and local_fit_kwargs. May be None if the sub-routine is itself a local or global fit that runs no further subsidiary fits.
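As referenced in the external_priors_penalty description above, a minimal sketch of such a penalty function (the parameter name delta_index and the Gaussian width are purely illustrative):

def my_prior_penalty(hypo_maker, metric):
    # Gaussian pull term on a hypothetical parameter; take the magnitude
    # of the pint quantity before doing arithmetic
    x = hypo_maker.params.delta_index.value.m
    pull = ((x - 0.0) / 0.1) ** 2
    # the sign convention must match the metric: positive for chi2-like,
    # negative for llh-like metrics (here we assume chi2)
    return pull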
- class pisa.analysis.analysis.Counter(i=0)[source]
Bases:
object
Simple counter object for use as a minimizer callback.
- property count
Current count
- Type:
int
- pisa.analysis.analysis.MINIMIZERS_USING_CONSTRAINTS = 'cobyla'
Minimizers that cannot use the ‘bounds’ argument and instead need bounds to be formulated in terms of constraints.
- pisa.analysis.analysis.MINIMIZERS_USING_SYMM_GRAD = ('l-bfgs-b', 'slsqp')
Minimizers that use symmetrical steps on either side of a point to compute gradients. See https://github.com/scipy/scipy/issues/4916