Model#

class Model(system, metrics=None, specification=None, parameters=None, retry_evaluation=True, exception_hook=None)[source]#

Create a Model object that allows for evaluation over a sample space.

Parameters:
  • system (System) – Should reflect the model state.

  • metrics (tuple[Metric]) – Metrics to be evaluated by model.

  • specification (Callable, optional) – Function called once all parameters are set; it should load specifications and simulate the system as well.

  • parameters (Iterable[Parameter], optional) – Parameters to sample from.

  • exception_hook (callable(exception, sample)) – Function called after a failed evaluation. The exception hook should return either None or metric values given the exception and sample.
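To make the evaluation contract concrete, here is a minimal stdlib-only sketch of the pattern a Model implements (set all parameters, simulate, then collect metric values, routing failures through the exception hook). This is not biosteam's implementation; all names below are illustrative.

```python
# Minimal sketch of the evaluate-over-samples pattern that a Model implements.
# NOT biosteam's implementation; names here are illustrative only.

def evaluate_model(setters, getters, samples, exception_hook=None):
    """Apply each sample via parameter setters, then collect metric values."""
    table = []
    for sample in samples:
        try:
            for setter, value in zip(setters, sample):
                setter(value)  # set all parameters first
            # ... a real Model would run the specification/simulation here
            table.append([getter() for getter in getters])
        except Exception as error:
            # Mirror exception_hook: it returns None or substitute metric values
            values = exception_hook(error, sample) if exception_hook else None
            table.append(values if values is not None
                         else [float('nan')] * len(getters))
    return table

# Toy usage: one "parameter" (a price) and one "metric" (cost at that price).
state = {'price': 1.0}
setters = [lambda x: state.__setitem__('price', x)]
getters = [lambda: 10 * state['price']]
table = evaluate_model(setters, getters, [[1.0], [2.0]])
```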

load_default_parameters(feedstock, shape=<function triang>, bounded_shape=<function bounded_triang>, operating_days=False, include_feedstock_price=True)#

Load all default parameters, including coefficients of cost items, stream prices, electricity price, heat utility prices, feedstock flow rate, and number of operating days.

Parameters:
  • feedstock (Stream) – Main feedstock of process.

  • shape (function, optional) – Should take in a baseline value and return a chaospy.Dist object or None. The default function returns a chaospy.Triangle object with bounds at ±10% of the baseline value. The distribution is applied to all parameters except the exponential factor “n” of cost items.

  • bounded_shape (function, optional) – Should take in a baseline value and return a chaospy.Dist object or None. The default function returns a chaospy.Triangle object with bounds at ±0.1 of the baseline value, with a minimum of 0 and a maximum of 1. The distribution is applied to the exponential factor “n” of cost items.

  • operating_days (bool, optional) – If True, include the number of operating days as a parameter.

get_baseline_sample(parameters=None, array=False)[source]#

Return a pandas Series object of parameter baseline values.

property optimized_parameters#

tuple[Parameter] All parameters optimized in the model.

property parameters#

tuple[Parameter] All parameters added to the model.

set_parameters(parameters)[source]#

Set parameters.

get_parameters()[source]#

Return parameters.

get_joint_distribution()[source]#

Return a chaospy joint distribution object consisting of all parameter distributions.

get_distribution_summary(xlfile=None)[source]#

Return dictionary of shape name-DataFrame pairs.

parameter(setter=None, element=None, kind=None, name=None, distribution=None, units=None, baseline=None, bounds=None, hook=None, description=None, scale=None, optimized=False)[source]#

Define and register parameter.

Parameters:
  • setter (function) – Should set parameter in the element.

  • element (Unit or Stream) – Element in the system being altered.

  • kind ({'coupled', 'isolated', 'design', 'cost'}, optional) –

    • ‘coupled’: parameter is coupled to the system.

    • ‘isolated’: parameter does not affect the system but does affect the element (if any).

    • ‘design’: parameter only affects design and/or cost of the element.

  • name (str, optional) – Name of parameter. If None, defaults to the argument name of the setter.

  • distribution (chaospy.Dist) – Parameter distribution.

  • units (str, optional) – Parameter units of measure.

  • baseline (float, optional) – Baseline value of parameter.

  • bounds (tuple[float, float], optional) – Lower and upper bounds of parameter.

  • hook (Callable, optional) – Should return the new parameter value given the sample.

  • scale (float, optional) – The sample is multiplied by the scale before setting.

Notes

If kind is ‘coupled’, account for downstream operations. Otherwise, only account for given element. If kind is ‘design’ or ‘cost’, element must be a Unit object.
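Since this method also works as a decorator, a stdlib-only sketch may help show how one method can serve both call styles. This is not biosteam's `Parameter` machinery, just the decorator-factory pattern; the stored fields are a small illustrative subset.

```python
# Sketch of how a parameter() method can double as a decorator factory.
# Illustrative only; biosteam's actual Parameter objects hold more metadata.

class MiniModel:
    def __init__(self):
        self.parameters = []

    def parameter(self, setter=None, *, name=None, baseline=None, units=None):
        if setter is None:  # called with keyword arguments: act as a decorator
            return lambda f: self.parameter(
                f, name=name, baseline=baseline, units=units
            )
        self.parameters.append({
            'setter': setter,
            'name': name or setter.__name__.replace('_', ' '),
            'baseline': baseline,
            'units': units,
        })
        return setter

model = MiniModel()

@model.parameter(baseline=0.05, units='USD/kg')
def feedstock_price(price):
    ...  # would set the price in the system
```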

problem()[source]#

Return a dictionary of parameter metadata (referred to as “problem”) to be used for sampling by SALib.

See also

SALib basics
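For reference, SALib's "problem" convention is a plain dictionary; a hand-built example of the expected shape follows (the parameter names and bounds are hypothetical).

```python
# Hand-built example of a SALib "problem" dictionary; the parameter names and
# bounds are hypothetical. Only the bounds matter to SALib's samplers.
problem = {
    'num_vars': 2,
    'names': ['feedstock price', 'operating days'],
    'bounds': [[0.03, 0.07],   # lower and upper bound of each parameter
               [300, 350]],
}

# SALib samplers consume this dict directly; the three fields must agree.
assert problem['num_vars'] == len(problem['names']) == len(problem['bounds'])
```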

sample(N, rule, **kwargs)[source]#

Return N samples from the parameter distributions using the given rule.

Parameters:
  • N (int) – Number of samples.

  • rule (str) – Sampling rule.

  • **kwargs – Keyword arguments passed to sampler.

Notes

This method relies on the chaospy library for sampling from distributions, and the SALib library for sampling schemes specific to sensitivity analysis.

For sampling from a joint distribution of all parameters, use the following rule flags:

  • C – Roots of the first-order Chebyshev polynomials.

  • NC – Chebyshev nodes adjusted to ensure nested.

  • K – Korobov lattice.

  • R – Classical (pseudo-)random samples.

  • RG – Regular spaced grid.

  • NG – Nested regular spaced grid.

  • L – Latin hypercube samples.

  • S – Sobol low-discrepancy sequence.

  • H – Halton low-discrepancy sequence.

  • M – Hammersley low-discrepancy sequence.

If sampling for sensitivity analysis, use the following rule flags:

  • MORRIS – Samples for Morris One-at-A-Time (OAT) analysis.

  • RBD – Latin hypercube samples for random balance design Fourier amplitude sensitivity test (RBD-FAST).

  • FAST – Samples for the Fourier amplitude sensitivity test (FAST).

  • SOBOL – Saltelli's sampling scheme for Sobol sensitivity analysis.

Note that only the distribution bounds (i.e., lower and upper bounds) are taken into account for sensitivity analysis; the type of distribution (e.g., triangular vs. uniform) does not affect the sampling.
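To illustrate one of the rules above, here is a stdlib-only sketch of Latin hypercube sampling (rule ‘L’) on the unit hypercube. The real implementation delegates to chaospy; this sketch only shows the stratification idea.

```python
import random

def latin_hypercube(N, dims, seed=None):
    """Stdlib-only sketch of Latin hypercube sampling on the unit hypercube.
    Each dimension is split into N equal strata; one point is drawn per
    stratum, and strata are shuffled independently per dimension."""
    rng = random.Random(seed)
    samples = [[0.0] * dims for _ in range(N)]
    for d in range(dims):
        strata = list(range(N))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / N  # one point per stratum
    return samples

points = latin_hypercube(5, 2, seed=0)
```

Each column of `points` hits every fifth of [0, 1) exactly once, which is what distinguishes Latin hypercube samples from plain pseudo-random samples.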

property specification: Callable#

Process specification.

copy()[source]#

Return copy.

property exception_hook#

[callable(exception, sample)] Function called after a failed evaluation. The exception hook should return either None or metric values given the exception and sample.

property metrics#

tuple[Metric] Metrics to be evaluated by model.

metric(getter=None, name=None, units=None, element=None)[source]#

Define and register metric.

Parameters:
  • getter (function, optional) – Should return metric.

  • name (str, optional) – Name of metric. If None, defaults to the name of the getter.

  • units (str, optional) – Metric units of measure

  • element (object, optional) – Element being evaluated. Works mainly for bookkeeping. Defaults to ‘Biorefinery’.

Notes

This method works as a decorator.

load_samples(samples=None, sort=None, file=None, autoload=None, autosave=None, distance=None)[source]#

Load samples for evaluation.

Parameters:
  • samples (numpy.ndarray, dim=2, optional) – All parameter samples to evaluate.

  • sort (bool, optional) – Whether to internally sort the samples to optimize convergence speed by minimizing perturbations to the system between simulations. The optimization problem is equivalent to the travelling salesman problem; each scenario of (normalized) parameters represents a point in the path. Defaults to False.

  • file (str, optional) – File to load/save samples and simulation order to/from.

  • autosave (bool, optional) – Whether to save samples and simulation order to file (when not loaded from file).

  • autoload (bool, optional) – Whether to load samples and simulation order from file (if possible).

  • distance (str, optional) – Distance metric used for sorting. Defaults to ‘cityblock’. See scipy.spatial.distance.cdist for options.

  • algorithm (str, optional) – Algorithm used for sorting. Defaults to ‘nearest neighbor’. Note that nearest neighbor is a greedy algorithm that is known to result, on average, in paths 25% longer than the shortest path.
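A stdlib-only sketch of the greedy nearest-neighbor ordering described above, using the cityblock (Manhattan) distance. This is not biosteam's implementation (which uses scipy.spatial.distance.cdist); it only shows the greedy visiting order.

```python
def nearest_neighbor_order(samples):
    """Greedy nearest-neighbor ordering of samples (cityblock distance).
    Starts at the first sample and repeatedly visits the closest
    unvisited one, so consecutive scenarios differ as little as possible."""
    cityblock = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
    unvisited = list(range(1, len(samples)))
    order = [0]
    while unvisited:
        last = samples[order[-1]]
        nearest = min(unvisited, key=lambda j: cityblock(last, samples[j]))
        order.append(nearest)
        unvisited.remove(nearest)
    return order

# Toy normalized samples: two clusters; the ordering stays within a cluster
# before jumping to the other one.
samples = [[0.0, 0.0], [0.9, 0.9], [0.1, 0.1], [1.0, 1.0]]
order = nearest_neighbor_order(samples)
```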

evaluate(notify=0, file=None, autosave=0, autoload=False, convergence_model=None, **kwargs)[source]#

Evaluate metrics over the loaded samples and save values to table.

Parameters:
  • notify (int, optional) – If 1 or greater, notify elapsed time after the given number of sample evaluations.

  • file (str, optional) – Name of file to save/load pickled evaluation results.

  • autosave (int, optional) – If 1 or greater, save pickled evaluation results after the given number of sample evaluations.

  • autoload (bool, optional) – Whether to load pickled evaluation results from file.

  • convergence_model (ConvergencePredictionModel, optional) – A prediction model for accelerated system convergence. Defaults to no convergence model and the last solution as the initial guess for the next scenario. If a string is passed, a ConvergenceModel will be created using that model type.

  • kwargs (dict) – Any keyword arguments passed to biosteam.System.simulate().

Warning

Any changes made to either the model or the samples will not be accounted for when autoloading and may lead to misleading results.

metrics_at_baseline()[source]#

Return metric values at baseline sample.

evaluate_across_coordinate(name, f_coordinate, coordinate, *, xlfile=None, notify=0, notify_coordinate=True, multi_coordinate=False, convergence_model=None, f_evaluate=None)[source]#

Evaluate across coordinate and save sample metrics.

Parameters:
  • name (str or tuple[str]) – Name of coordinate.

  • f_coordinate (function) – Should change state of system given the coordinate.

  • coordinate (array) – Coordinate values.

  • xlfile (str, optional) – Name of file to save. File must end with “.xlsx”.

  • rule (str, optional) – Sampling rule. Defaults to ‘L’.

  • notify_coordinate (bool, optional) – If True, notify elapsed time after each coordinate evaluation. Defaults to True.

  • f_evaluate (callable, optional) – Function to evaluate model. Defaults to evaluate method.

spearman_r(parameters=None, metrics=None, filter=None, **kwargs)[source]#

Return two DataFrame objects of Spearman’s rho and p-values between metrics and parameters.

Parameters:
  • parameters (Iterable[Parameter], optional) – Parameters to be correlated with metrics. Defaults to all parameters.

  • metrics (Iterable[Metric], optional) – Metrics to be correlated with parameters. Defaults to all metrics.

  • filter (Callable(x, y) -> x, y, or string, optional) –

    Function that accepts 1d arrays of x and y values and returns filtered x and y values to correlate. May also be one of the following strings:

    • ‘none’: no filter

    • ‘omit nan’: all NaN values are ignored in correlation

    • ‘propagate nan’: NaN values return NaN correlation results

    • ‘raise nan’: NaN values will raise a ValueError

  • **kwargs – Keyword arguments passed to scipy.stats.spearmanr().
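For intuition, Spearman's rho is simply the Pearson correlation of the ranks. This is not biosteam's implementation (which delegates to scipy.stats.spearmanr); it is a stdlib sketch of what the statistic computes.

```python
def rank(xs):
    """Rank values from 1..n, assigning average ranks to ties."""
    order = sorted(range(len(xs)), key=xs.__getitem__)
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1  # average of tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# One swapped pair out of four gives rho = 0.8.
rho = spearman_rho([1, 2, 3, 4], [10, 30, 20, 40])
```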

pearson_r(parameters=None, metrics=None, filter=None, **kwargs)[source]#

Return two DataFrame objects of Pearson’s r and p-values between metrics and parameters.

Parameters:
  • parameters (Iterable[Parameter], optional) – Parameters to be correlated with metrics. Defaults to all parameters.

  • metrics (Iterable[Metric], optional) – Metrics to be correlated with parameters. Defaults to all metrics.

  • filter (Callable(x, y) -> x, y, or string, optional) –

    Function that accepts 1d arrays of x and y values and returns filtered x and y values to correlate. May also be one of the following strings:

    • ‘none’: no filter

    • ‘omit nan’: all NaN values are ignored in correlation

    • ‘propagate nan’: NaN values return NaN correlation results

    • ‘raise nan’: NaN values will raise a ValueError

  • **kwargs – Keyword arguments passed to scipy.stats.pearsonr().

kendall_tau(parameters=None, metrics=None, filter=None, **kwargs)[source]#

Return two DataFrame objects of Kendall’s tau and p-values between metrics and parameters.

Parameters:
  • parameters (Iterable[Parameter], optional) – Parameters to be correlated with metrics. Defaults to all parameters.

  • metrics (Iterable[Metric], optional) – Metrics to be correlated with parameters. Defaults to all metrics.

  • filter (Callable(x, y) -> x, y, or string, optional) –

    Function that accepts 1d arrays of x and y values and returns filtered x and y values to correlate. May also be one of the following strings:

    • ‘none’: no filter

    • ‘omit nan’: all NaN values are ignored in correlation

    • ‘propagate nan’: NaN values return NaN correlation results

    • ‘raise nan’: NaN values will raise a ValueError

  • **kwargs – Keyword arguments passed to scipy.stats.kendalltau().

kolmogorov_smirnov_d(parameters=None, metrics=None, thresholds=[], filter=None, **kwargs)[source]#

Return two DataFrame objects of Kolmogorov–Smirnov’s D and p-values with the given thresholds for the metrics.

For one particular parameter, all of the sampled values will be divided into two sets, one where the resulting metric value is smaller than or equal to the threshold, and the other where the resulting value is larger than the threshold.

Kolmogorov–Smirnov test will then be performed for these two sets of values for the particular parameter.

Parameters:
  • parameters (Iterable[Parameter], optional) – Parameters to be correlated with metrics. Defaults to all parameters.

  • metrics (Iterable[Metric], optional) – Metrics to be correlated with parameters. Defaults to all metrics.

  • thresholds (Iterable[float]) – The thresholds for separating parameters into sets.

  • filter (Callable(x, y) -> x, y, or string, optional) –

    Function that accepts 1d arrays of x and y values and returns filtered x and y values to correlate. May also be one of the following strings:

    • ‘none’: no filter

    • ‘omit nan’: all NaN values are ignored in correlation

    • ‘propagate nan’: NaN values return NaN correlation results

    • ‘raise nan’: NaN values will raise a ValueError

  • **kwargs – Keyword arguments passed to scipy.stats.kstest().
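A stdlib-only sketch of the threshold-splitting logic described above (biosteam delegates the actual test to scipy.stats.kstest). The data here is made up; the statistic D is the maximum gap between the two empirical CDFs of the split parameter values.

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov–Smirnov D: max gap between empirical CDFs."""
    points = sorted(set(a) | set(b))
    cdf = lambda xs, t: sum(1 for x in xs if x <= t) / len(xs)
    return max(abs(cdf(a, t) - cdf(b, t)) for t in points)

# Hypothetical data: split one parameter's sampled values by whether the
# resulting metric value exceeds a threshold, as described above.
parameter_values = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
metric_values    = [1.0, 1.1, 0.9, 1.3, 2.1, 2.4, 2.2, 2.5]
threshold = 2.0
below = [p for p, m in zip(parameter_values, metric_values) if m <= threshold]
above = [p for p, m in zip(parameter_values, metric_values) if m > threshold]
D = ks_statistic(below, above)  # large D: the parameter drives the metric
```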

__call__(sample, **kwargs)[source]#

Return pandas Series of metric values at given sample.

show(p=None, m=None)[source]#

Return information on p-parameters and m-metrics.