brainiak.hyperparamopt package

Hyperparameter optimization package


brainiak.hyperparamopt.hpo module

Hyperparameter Optimization (HPO)

This implementation is based on the work in [Bergstra2011] and [Bergstra2013].

[Bergstra2011] "Algorithms for Hyper-Parameter Optimization", James S. Bergstra, Rémi Bardenet, Yoshua Bengio, Balázs Kégl. NIPS 2011.
[Bergstra2013] "Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures", James Bergstra, Daniel Yamins, David Cox. JMLR W&CP 28(1):115–123, 2013.
brainiak.hyperparamopt.hpo.fmin(loss_fn, space, max_evals, trials, init_random_evals=30, explore_prob=0.2)

Find the minimum of a function through hyperparameter optimization.

Parameters:

  • loss_fn (function(dict) -> float) – Function that takes a dictionary of hyperparameter values and returns a real value. This is the function to be minimized.
  • space (dictionary) – Custom dictionary specifying the range and distribution of each hyperparameter, e.g. space = {'x': {'dist': scipy.stats.uniform(0,1), 'lo': 0, 'hi': 1}} for a 1-dimensional space with variable x in range [0, 1].
  • max_evals (int) – Maximum number of evaluations of loss_fn allowed.
  • trials (list) – Holds the output of the optimization trials. It need not be empty to begin with; new trials are appended at the end.
  • init_random_evals (Optional[int], default 30) – Number of random trials used to initialize the optimization.
  • explore_prob (Optional[float], default 0.2) – Controls the exploration-vs-exploitation ratio. The value should be in [0, 1]. By default, 20% of trials are random samples.

Returns:

Best hyperparameter setting found, e.g. {'x': 5.6, 'loss': 0.5}, where x is the best hyperparameter value found and loss is the value of the function at that setting.

Return type:

trial entry (dictionary of hyperparameters)


Raises:

ValueError – If the distribution specified in space does not support an rvs() method for generating random numbers.
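The space dictionary format and the rvs() requirement can be illustrated with plain scipy.stats distributions. This is a sketch of the documented input format only; the variable names and ranges below are made up.

```python
import scipy.stats

# A hypothetical 2-D search space in the documented format:
space = {
    'x': {'dist': scipy.stats.uniform(0, 1), 'lo': 0.0, 'hi': 1.0},
    'y': {'dist': scipy.stats.norm(0, 1), 'lo': -5.0, 'hi': 5.0},
}

# fmin raises ValueError when a distribution lacks rvs();
# the same precondition can be checked up front:
for name, spec in space.items():
    if not hasattr(spec['dist'], 'rvs'):
        raise ValueError(f"Distribution for {name!r} must support rvs()")

# Drawing one random value per variable, as the initial
# init_random_evals random trials would:
sample = {name: spec['dist'].rvs() for name, spec in space.items()}
```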

brainiak.hyperparamopt.hpo.get_next_sample(x, y, min_limit=-inf, max_limit=inf)

Get the next point to try, given the previous samples.

We use the approach of [Bergstra2013] to compute the point with the largest expected improvement (EI) in the optimization function. The model fits two different GMMs: one to the points whose loss values are in the bottom 15%, and another to the rest. We then draw samples from the former distribution and estimate the EI of each as the ratio of the likelihoods under the two distributions. We pick the sample with the best EI that is also not too close to a previously sampled point.

Parameters:

  • x (1D array) – Samples generated from the distribution so far
  • y (1D array) – Loss values at the corresponding samples
  • min_limit (float, default: -inf) – Minimum limit for the distribution
  • max_limit (float, default: +inf) – Maximum limit for the distribution

Returns:

Next value to use for HPO

Return type:
float

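The EI selection rule described above can be sketched as a simplified reimplementation in plain numpy/scipy. This is illustrative only, not the library code: equal component weights and a single crude bandwidth stand in for the per-point sigmas of the library's GMMs, and the "not too close" filter is omitted.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def next_sample_sketch(x, y, n_candidates=100):
    """Simplified EI selection: fit one GMM to the bottom-15% points
    and one to the rest, sample candidates from the former, and keep
    the candidate with the best likelihood ratio."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cut = np.percentile(y, 15)
    good, rest = x[y <= cut], x[y > cut]   # low-loss points vs the rest
    sigma = x.std() + 1e-6                 # crude common bandwidth

    def gmm_pdf(pts, centers):
        # Equal-weight Gaussian mixture centred on `centers`.
        return norm.pdf(pts[:, None], centers[None, :], sigma).mean(axis=1)

    # Sample candidates from the "good" mixture, score by EI ratio.
    cand = rng.choice(good, n_candidates) \
        + sigma * rng.standard_normal(n_candidates)
    ei = gmm_pdf(cand, good) / (gmm_pdf(cand, rest) + 1e-12)
    return cand[np.argmax(ei)]

x = rng.uniform(-5, 5, 50)
y = (x - 2.0) ** 2                         # toy loss with minimum at x = 2
nxt = next_sample_sketch(x, y)
```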
brainiak.hyperparamopt.hpo.get_sigma(x, min_limit=-inf, max_limit=inf)

Compute the standard deviations around the points for a 1D GMM.

We take the distance from the nearest left and right neighbors for each point, then use the max of the two as the estimate of the standard deviation for the Gaussian mixture component around that point.

Parameters:

  • x (1D array) – Set of points to create the GMM
  • min_limit (Optional[float], default: -inf) – Minimum limit for the distribution
  • max_limit (Optional[float], default: inf) – Maximum limit for the distribution

Returns:

Array of standard deviations

Return type:

1D array
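The neighbor-distance rule can be sketched in a few lines of numpy. This is an illustrative reimplementation of the rule as described, not the library code; in particular, the fallback for boundary points when a limit is infinite is an assumption.

```python
import numpy as np

def get_sigma_sketch(x, min_limit=-np.inf, max_limit=np.inf):
    """Per-point sigma = max of the distances to the nearest left and
    right neighbors (finite limits act as extra neighbors)."""
    x = np.sort(np.asarray(x, dtype=float))
    left = np.diff(x, prepend=min_limit)   # distance to left neighbor
    right = np.diff(x, append=max_limit)   # distance to right neighbor
    sigma = np.maximum(left, right)
    # Assumption: with infinite limits, the boundary points fall back
    # to their single finite neighbor distance.
    if np.isinf(sigma[0]):
        sigma[0] = right[0]
    if np.isinf(sigma[-1]):
        sigma[-1] = left[-1]
    return sigma

s = get_sigma_sketch([0.0, 1.0, 3.0])  # → [1., 2., 2.]
```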

class brainiak.hyperparamopt.hpo.gmm_1d_distribution(x, min_limit=-inf, max_limit=inf, weights=1.0)

Bases: object

GMM 1D distribution.

Given a set of points, we create this object so that we can calculate likelihoods and generate samples from this 1D Gaussian mixture model.


Attributes:

  • x (1D array) – Set of points used to create the GMM
  • (int) – Number of points used to create the GMM
  • min_limit (Optional[float], default: -inf) – Minimum limit for the distribution
  • max_limit (Optional[float], default: inf) – Maximum limit for the distribution
  • weights (Optional[1D array], default: array of ones) – Used to weight the points non-uniformly if required


Return the GMM likelihood for given point(s).

See (1).

Parameters: x (scalar or 1D array of reals) – Point(s) at which the likelihood is to be computed
Returns: Likelihood values at the given point(s)
Return type: scalar or 1D array

Calculate the GMM likelihood for a single point.

(1)\[y = \frac{\sum_{i=1}^{N} w_i \, \text{normpdf}(x, x_i, \sigma_i)}{\sum_{i=1}^{N} w_i}\]
Parameters: x (float) – Point at which the likelihood is to be computed
Returns: Likelihood value at x
Return type: float

Sample the GMM distribution.

Parameters: n (int) – Number of samples needed
Returns: Samples from the distribution
Return type: 1D array
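One standard way to sample such a mixture (a sketch, not necessarily how the library implements it): choose a component with probability proportional to its weight, then draw from that component's normal distribution. The toy mixture values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy mixture: centres, per-point sigmas, and weights (illustrative).
xi = np.array([0.0, 1.0, 3.0])
sigma = np.array([0.5, 0.5, 1.0])
w = np.array([1.0, 2.0, 1.0])

def samples(n):
    """Draw n samples: pick a component with probability w_i / sum(w),
    then draw from that component's normal distribution."""
    idx = rng.choice(len(xi), size=n, p=w / w.sum())
    return rng.normal(xi[idx], sigma[idx])

s = samples(10000)
# Mixture mean is (0*1 + 1*2 + 3*1) / 4 = 1.25
```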