Gathering a few ideas and observations related to Variables.

## Current Status

- Method: Grid, Random (uniform), Halton (quasi-random), Constant/Fixed (possibly multiple)
- Advantage: the sampling method can be changed easily, the encoder depends mostly on the spacing, and new distributions or methods can be added independently
- Connection: each sampling method assumes a linear spacing; a transformation (the encoder) maps it to the specified distribution
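This separation of sampler and encoder can be illustrated with a minimal sketch (assuming NumPy; the function names `halton` and `encode_loguniform` are illustrative, not the framework's API):

```python
import numpy as np

def halton(n, base=2):
    """Quasi-random Halton sequence in [0, 1) via the radical inverse."""
    seq = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

def encode_loguniform(u, lo, hi):
    """Encoder: map linear spacing in [0, 1) to a log-uniform distribution."""
    return lo * (hi / lo) ** u

# Every sampler produces a linear spacing in [0, 1); the encoder maps it
# to the requested distribution, so the two concerns stay independent.
u = halton(5)
samples = encode_loguniform(u, 1e-3, 1e3)
```

Swapping `halton` for a grid or uniform random sampler requires no change to the encoder, which is the advantage noted above.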
Four types of variables:

- Input: the relevant dimensions for sampling and active learning
- Output
- Constant: ignored in the Surrogate but usable in the Worker; possibly allow a list of constants, with each step run for each set of constants in parallel?
- Independent: fixed values, expanded to an axis or reduced to an index in the Surrogate
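One hypothetical way to represent these four kinds (a sketch only; the class and attribute names are assumptions, not the existing implementation):

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    INPUT = "input"              # sampled / active-learned dimensions
    OUTPUT = "output"
    CONSTANT = "constant"        # usable in the Worker, ignored by the Surrogate
    INDEPENDENT = "independent"  # fixed values, expanded to an axis in the Surrogate

@dataclass
class Variable:
    name: str
    kind: Kind
    value: object = None  # distribution spec, constant value, or axis values

    @property
    def in_surrogate(self) -> bool:
        """Constants are dropped before the data reaches the Surrogate."""
        return self.kind is not Kind.CONSTANT

variables = [
    Variable("u", Kind.INPUT),
    Variable("c", Kind.CONSTANT, value=3.14),
]
surrogate_vars = [v for v in variables if v.in_surrogate]
```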
Active Learning is a separate section, not a method:

- uses the sampling method for the warmup phase
- searches and proposes new samples for the specified variables; takes the sampling method for all others
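The warmup-then-propose flow can be sketched as follows. This is a toy stand-in: the `propose` step here just picks the candidate farthest from existing samples, where a real implementation would maximize an acquisition function such as surrogate variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def warmup(n_warmup, dim):
    """Warmup phase: fall back to the plain sampling method (here: random)."""
    return rng.random((n_warmup, dim))

def propose(samples, n_candidates=256):
    """Stand-in acquisition: choose the candidate point farthest from all
    existing samples (a real version would query the surrogate)."""
    candidates = rng.random((n_candidates, samples.shape[1]))
    dists = np.linalg.norm(candidates[:, None, :] - samples[None, :, :], axis=-1)
    return candidates[dists.min(axis=1).argmax()]

X = warmup(10, dim=2)        # warmup phase uses the sampling method
for _ in range(5):           # active-learning phase proposes one point at a time
    X = np.vstack([X, propose(X)])
```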
## Requirements

- load user-specified parameters and old data
- discrete variables
- function valued variables
  - high dimensionality
  - high number of support points
  - support points follow a spacing / distribution themselves, usually continuous
  - best practice: decomposition / dimensionality reduction
  - alternative for function outputs: the surrogate treats the independent dimension just as another input dimension
  - for inputs: how to specify constraints, how to sample?
  - idea (by Maximilian): specify `import(profiles.nc)` to load example profiles, then run a PCA and sample from the components
- tensor valued variables
  - low dimensionality: indices, discrete
  - can be uncorrelated
  - sometimes dimensionality reduction is possible
  - shouldn't need an additional index variable: only specify the shape
- some variables are connected differently
  - most prominent: mean & error
  - use case: heteroscedastic noise
  - should this relation be specified? If so, how?
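The import-then-PCA idea for function-valued inputs can be sketched like this (assuming the profiles have already been loaded from the netCDF file into a NumPy array, rows = example profiles; function names are illustrative):

```python
import numpy as np

def fit_pca(profiles, n_components):
    """PCA via SVD on mean-centered example profiles (rows = profiles)."""
    mean = profiles.mean(axis=0)
    _, s, vt = np.linalg.svd(profiles - mean, full_matrices=False)
    std = s[:n_components] / np.sqrt(len(profiles) - 1)  # coefficient std dev
    return mean, std, vt[:n_components]

def sample_profiles(mean, std, components, n, rng):
    """Sample new profiles by drawing a few PCA coefficients instead of
    sampling every support point independently."""
    coeff = rng.standard_normal((n, len(std))) * std
    return mean + coeff @ components

rng = np.random.default_rng(0)
# Synthetic stand-in for imported example profiles: 20 profiles, 50 support points.
profiles = np.sin(np.linspace(0, np.pi, 50))[None, :] * rng.random((20, 1))
mean, std, comps = fit_pca(profiles, n_components=2)
new = sample_profiles(mean, std, comps, n=5, rng=rng)
```

Sampling in the low-dimensional component space is what makes the high number of support points tractable.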
## Ideas for Improvement

Proposed example configuration:

```yaml
variables:
  u: Uniform(min, max)                # ~ Linear
  v: Logarithmic(min, max, base=10)   # ~ Log, LogUniform
  x: Gaussian(mean, std)              # ~ Normal
  y:
    - Logarithmic
    - Gaussian(mean, std)             # ~ LogGaussian
  f:
    - import(profile.nc)
    - PCA(n or tol)
sampling:
  n: 10
  Random: u, v                        # ~ Uniform
  Halton: x, y
  Grid: f
activelearning:
  n: 30
  variables: x, y, f                  # u, v sampled randomly
```
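A variable spec such as `Uniform(min, max)` or a bare `Logarithmic` could be split into a distribution name plus arguments with a small parser. This is a sketch of one possible approach; the spec syntax itself is still a proposal:

```python
import re

SPEC = re.compile(r"(\w+)\((.*)\)$")

def parse_spec(spec):
    """Split a spec like 'Logarithmic(1, 100, base=10)' into
    (name, positional args, keyword args), all as strings."""
    m = SPEC.match(spec.strip())
    if m is None:
        return spec.strip(), [], {}   # bare name, e.g. 'Logarithmic'
    name, body = m.groups()
    args, kwargs = [], {}
    for part in filter(None, (p.strip() for p in body.split(","))):
        if "=" in part:
            key, value = part.split("=", 1)
            kwargs[key.strip()] = value.strip()
        else:
            args.append(part)
    return name, args, kwargs
```

The returned name could then be looked up in a registry of distribution classes, which keeps adding new distributions independent of the parser.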