cadCAD specification (DRAFT)
Suppose $X \in \mathcal{X}$ is the state of the system, where $\mathcal{X}$ is the statespace. The system model is a generalized dynamical system if there is a map $h: \mathcal{X} \rightarrow \mathcal{X}$ such that the posterior state is given by $X^+ = h(X)$. Our set of generalized dynamical systems is the set of all such maps from $\mathcal{X}$ to itself.
Now let's suppose that we wish to decompose the operator $h$ into an input function $g: \mathcal{X} \rightarrow \mathcal{U}$ and a state transition function $f: \mathcal{X} \times \mathcal{U} \rightarrow \mathcal{X}$ such that $h(X) = f(X, g(X))$.

Define a state transition function (aka mechanism) as $X^+ = f(X, u)$; for any selection $u = g(X) \in \mathcal{U}$, this gives us a single "state update block".
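As a minimal sketch of this decomposition in Python (the toy state, the function bodies, and all names here are illustrative, not part of the specification):

```python
# Minimal sketch of the decomposition h(X) = f(X, g(X)) over a toy one-field state.

def g(state):
    """Input function: read the state and produce an input u."""
    return {'delta': 1 if state['count'] < 10 else 0}

def f(state, u):
    """State transition function (mechanism): apply the input u to the prior state."""
    return {**state, 'count': state['count'] + u['delta']}

def h(state):
    """State update block: the composition f(X, g(X))."""
    return f(state, g(state))

print(h({'count': 3}))  # {'count': 4}
```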
Define a set of state update blocks $h_1, h_2, \ldots, h_k$, each of which is itself a state update map from $\mathcal{X}$ to $\mathcal{X}$.

Sidebar

Then we will call each $h_i$ a partial state update block, where the composition $h = h_k \circ \cdots \circ h_2 \circ h_1$ is the complete state update. A run is a sequence of states $X_0, X_1, \ldots, X_T$ produced by repeatedly applying $h$ to an initial state $X_0$; this gives us a loop:

for $t = 1, \ldots, T$:
$\qquad$for $i = 1, \ldots, k$: $\quad X^{(i)}_t = h_i(X^{(i-1)}_t)$, with $X^{(0)}_t = X_{t-1}$ and $X_t = X^{(k)}_t$
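A minimal, self-contained sketch of such a run loop follows; the two toy blocks and the choice to retain one state per timestep are illustrative assumptions:

```python
# Sketch of a run: apply each partial state update block in order at every
# timestep, retaining the state at the end of each timestep.

def add_one(state):
    return {**state, 'count': state['count'] + 1}

def double(state):
    return {**state, 'count': state['count'] * 2}

def run(genesis_state, blocks, timesteps):
    trajectory = [genesis_state]
    state = genesis_state
    for t in range(timesteps):
        for block in blocks:        # partial state update blocks h_1 ... h_k
            state = block(state)
        trajectory.append(state)    # one retained state per timestep
    return trajectory

print(run({'count': 1}, [add_one, double], timesteps=3))
# [{'count': 1}, {'count': 4}, {'count': 10}, {'count': 22}]
```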
What is actually retained? In the most expressive version we can store every intermediate state $X^{(i)}_t$ produced by each partial state update block; the next less expressive option would be to store only the state $X_t$ at the end of each timestep; the even more parsimonious option would be to store only the final state of each run; and if desired one could choose a different middle ground with specific outputs retained.
A potential future feature is to allow flagging partial state updates so that their outputs are stored, and/or flagging specific states so that their outputs are stored even after the simulation itself no longer references them directly.
If we think of cadCAD as similar to ML algorithms in the sense that there is a valid sequence of transformers, we can define classes:

- functions $h\in \mathcal{D}$
- data structures $\wp(\mathcal{X})$
- functions $g$ and $f$
- assertions of the form $A: \mathcal{X} \rightarrow \{0,1\}$ which select a subspace $\mathcal{X}^C \subset \mathcal{X}$
- assertions of the form $a: \mathcal{X} \times \mathcal{X} \rightarrow \{0,1\}$ which verify the validity of a state transition $X^+ = h(X)$, e.g. $V(X^+)==V(X)$.
(suggestions)
- State objects (JSON-like with nested key-value pairs, and optionally assertions on the relationships between values, e.g. sum of agent tokens = supply of token)
- State update functions: operators of type $h:\mathcal{X} \rightarrow \mathcal{X}$, which are created by combining input operators $u=g(X)$ and $X^+=f(X, u)$ where the domains and ranges match, such that $X^+=f(X, g(X))$ maps $\mathcal{X}$ to $\mathcal{X}$.
- State transition functions $f(X,u)$ take a prior state $X\in \mathcal{X}$ and an input $u\in\mathcal{U}$ and return a posterior state $X^+\in\mathcal{X}$. There may be state, time, or actor dependent restrictions captured by the set $\mathcal{U}$. The mechanism is passive if applying an empty action returns the prior state: $X^+=f(X, \emptyset) = X$.
- Input functions $u = g(X)$ take the state object as their domain and have as their codomain a space $\mathcal{U}$ which may also have restrictions. This operator has a variety of special cases (see the sketch after this list):
  - i) Environmental processes are generally modeled as random variables which drive stochastic processes in the associated functions $f(X,u)$.
  - ii) Behavioral models use $u$ to represent actions taken by actors outside the control of the systems engineer; depending on the application, the admissible action set $\mathcal{U}$ may be restricted in a state dependent manner.
  - iii) Control policies are feedback processes which adapt the system according to rules defined by the systems engineer with the intent to enforce an invariant or steer the system with respect to an objective.
  - iv) Input functions may simply be reads of external data sets or streams from other runtimes.
- A sequence of operations which are state update functions may be bundled into a pipeline which is collectively a valid state update function; this pipeline is what we call the "partial state update blocks", and applying the whole sequence is the state update function.
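As a hedged sketch of two of the special cases above, an environmental (stochastic) input function and a passive mechanism that returns the prior state on an empty action, with all names and values illustrative:

```python
import random

# Environmental process: the input u is a random variable driving the update.
def g_environment(state):
    return {'demand_shock': random.gauss(0.0, 1.0)}

# Passive mechanism: applying the empty action returns the prior state unchanged.
def f_price(state, u):
    if not u:                          # empty action: X+ = f(X, {}) = X
        return state
    new_price = max(0.0, state['price'] + u['demand_shock'])
    return {**state, 'price': new_price}

X = {'price': 10.0}
assert f_price(X, {}) == X             # passivity check
print(f_price(X, g_environment(X)))    # stochastic update
```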
Note that for testing purposes, nearly everything about a simulation can be checked using pre and post conditions on the state objects:
- General assertions about the state object itself, which could be checked in debugging or automated testing mode at every point an object $X\in \mathcal{X}$ is either referenced or returned.
- Specific assertions about local invariants $V(X^+)=V(X)$ (or others such as $V(X^+)< V(X)$, or any boolean) under a particular operation $h(X)$.
- Specific assertions about the admissibility of a particular action $u\in \mathcal{U}$, most commonly constructed as checking whether the image of some action $u$ results in $f(X,u)\in \mathcal{X}$.

In theory other assertions are possible as well.
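For instance, a local invariant of the kind described above, here a conservation check $V(X^+)==V(X)$ around a token transfer, could be sketched as follows (the state field anticipates the credCastle example below; the helper names are illustrative):

```python
# Sketch of a local invariant check V(X+) == V(X): total token holdings
# must be conserved by a transfer between holders.

def V(state):
    return sum(state['castle_token_holdings'])

def transfer(state, sender, receiver, amount):
    holdings = list(state['castle_token_holdings'])
    holdings[sender] -= amount
    holdings[receiver] += amount
    return {**state, 'castle_token_holdings': holdings}

X = {'castle_token_holdings': [3, 1, 2]}
X_plus = transfer(X, sender=0, receiver=2, amount=1)
assert V(X_plus) == V(X)                 # conservation invariant holds under this operation
print(X_plus['castle_token_holdings'])   # [2, 1, 3]
```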
Broadly speaking, since we're doing this over arbitrary data structures, another family of assertions we could explore is unit checking, if it is the case that fields of the state can be said to have units, and if the units themselves have well defined relations.
This example is drawn from the credCastle model being worked on by one cadCAD community "study group".
Define the state object's top-level keys and assign them data types:
state_schema = {
    'castle': int,
    'castle_token_price': float,
    'castle_token_supply': int,
    'castle_token_holdings': list
}
Then create a genesis state:
import numpy as np

n = 10  # number of token holders (illustrative value; not specified above)
init_holdings = [np.random.randint(0, 2) for i in range(n)]
# cast to a plain int so the value matches the int type declared in the schema
sum_of_init_holdings = int(sum(init_holdings))

genesis_state = {
    'castle': 1,
    'castle_token_price': 1.2,
    'castle_token_supply': sum_of_init_holdings,
    'castle_token_holdings': init_holdings
}
Our state is defined by a set of keys declared in the schema.
keys = state_schema.keys()
print(keys)
dict_keys(['castle', 'castle_token_price', 'castle_token_supply', 'castle_token_holdings'])
Our genesis state has the same keys!
print(genesis_state.keys()==keys)
True
We also see that our genesis state respects the schema!
for k in keys:
    print(type(genesis_state[k]) == state_schema[k])
True
True
True
True
This simple example demonstrates the idea that we can declare boolean functions over our state objects to verify that they are in fact in our statespace.
Also possible:
- more sophisticated schemas (including classes and nested structures)
- logics that run contingent on the violation of these rules, which project back into the domain $\mathcal{X}$ or cause actions to fail and revert to the prior state (sketched below).
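A hedged sketch of the second possibility, wrapping an update so that a violated assertion causes the action to fail and the state to revert, might look like this (names and values are illustrative):

```python
# Sketch: run an update, check an assertion on the result, and revert to the
# prior state if the assertion fails.

def guarded_update(state, update, assertion):
    candidate = update(state)
    return candidate if assertion(candidate) else state

def non_negative_supply(s):
    return s['castle_token_supply'] >= 0

def burn_tokens(s):
    return {**s, 'castle_token_supply': s['castle_token_supply'] - 10}

state = {'castle_token_supply': 5}
print(guarded_update(state, burn_tokens, non_negative_supply))
# {'castle_token_supply': 5} -- the burn would have gone negative, so the action reverts
```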
In the cadCAD simulation methodology, we operate on four layers: Policies, Mechanisms, States, and Metrics. Information flows do not have explicit feedback loops unless noted. Policies determine the inputs into the system dynamics, and can come from user input, observations from the exogenous environment, or algorithms. Mechanisms are functions that take the policy decisions and update the States to reflect the policy-level changes. States are variables that represent the system quantities at a given point in time, and Metrics are computed from state variables to assess the health of the system. Metrics can often be thought of as KPIs, or Key Performance Indicators.
At a more granular level, to set up a model, there are system conventions and configurations that must be followed.
The way to think of cadCAD modeling is analogous to machine learning pipelines, which normally consist of multiple steps when training and running a deployed model. There is preprocessing, which includes segregating features between continuous and categorical, transforming or imputing data, and then instantiating, training, and running a machine learning model with specified hyperparameters. cadCAD modeling can be thought of in the same way: states, roughly translating into features, are fed into pipelines that have built-in logic to direct traffic between different mechanisms, such as scaling and imputation. Accuracy scores, ROC curves, etc. are analogous to the metrics that can be configured on a cadCAD model, specifying how well a given model is doing in meeting its objectives. The parameter sweeping capability of cadCAD can be thought of as a grid search, or a way to find the optimal hyperparameters for a system by running through alternative scenarios. The A/B-style testing that cadCAD enables is used in the same way machine learning models are A/B tested, except out of the box, providing a side-by-side comparison of multiple different models to compare and contrast performance. Utilizing the field of Systems Identification, dynamical systems models can be used to "online learn" by providing a feedback loop to generative system mechanisms.
The flexibility of cadCAD also enables the embedding of machine learning models into behavior policies or mechanisms for complex systems with a machine learning prediction component.
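Conceptually, a parameter sweep is just a grid of parameter combinations; the following plain-Python sketch illustrates the grid-search analogy only and is not cadCAD's configuration API:

```python
import itertools

# Plain-Python sketch of parameter sweeping as a grid search: every combination
# of the swept parameters defines one scenario to simulate and compare.
sweep = {
    'alpha': [0.1, 0.5],
    'beta':  [0.01, 0.05, 0.1],
}

scenarios = [dict(zip(sweep.keys(), values))
             for values in itertools.product(*sweep.values())]
print(len(scenarios))   # 6 scenarios
print(scenarios[0])     # {'alpha': 0.1, 'beta': 0.01}
```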
System Dynamics is a modeling paradigm used to model the nonlinear behavior of complex systems using flows, stocks, and feedback loops. System Dynamics modeling is very useful for modeling population flows, financial statements, etc., but has a limited ability to represent complex agent and system interactions.
An example model for understanding dynamical systems is the commonly used Lotka–Volterra predator-prey model, a pair of first order nonlinear differential equations used to describe the dynamics of two interacting species, one a predator and the other its prey. We can model the population changes over time.
It is based on the following [2,3]:

$$\begin{aligned}\frac{dx}{dt}&=\alpha x-\beta xy,\\ \frac{dy}{dt}&=\delta xy-\gamma y,\end{aligned}$$
Where:
- $x$ is the number of prey
- $y$ is the number of some predator
- $\frac{dx}{dt}$ and $\frac{dy}{dt}$ represent the instantaneous growth rates of the two populations
- $t$ represents time
- $\alpha$, $\beta$, $\gamma$, $\delta$ are positive real parameters describing the interaction of the two species.
The most prominent feature of this model is the existence, depending on the choice of parameters, of a repeatable cycle around a fixed point which creates a dynamical equilibrium between the numbers of prey and predators in the system.
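In a discrete-time simulation these equations are typically advanced with a simple forward-Euler step of size $\Delta t$; the following discretization is one such choice (the scheme and step size are modeling assumptions, not part of the model itself):

$$\begin{aligned}x_{t+1} &= x_t + \Delta t\,(\alpha x_t - \beta x_t y_t),\\ y_{t+1} &= y_t + \Delta t\,(\delta x_t y_t - \gamma y_t).\end{aligned}$$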
partial_state_update_block = [
    {
        'policies': {
            'reproduce_prey': p_reproduce_prey,
            'reproduce_predators': p_reproduce_predators,
            'eliminate_prey': p_eliminate_prey,
            'eliminate_predators': p_eliminate_predators
        },
        'variables': {
            'prey_population': s_prey_population,
            'predator_population': s_predator_population
        }
    }
]
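The policy and state update functions referenced in this block are defined elsewhere in the model; as a hedged sketch, one such pair might look like the following, using the common cadCAD function signatures (`params, substep, state_history, previous_state` for policies, plus `policy_input` for state updates) and the Euler step above. The parameter names, step size, and function bodies are assumptions for illustration:

```python
# Illustrative sketch of one policy / state-update pair for the block above.
DT = 0.1  # assumed Euler step size

def p_reproduce_prey(params, substep, state_history, previous_state):
    # alpha * x * dt : prey born this timestep
    born = params['alpha'] * previous_state['prey_population'] * DT
    return {'add_prey': born}

def s_prey_population(params, substep, state_history, previous_state, policy_input):
    # net change = births (add_prey) minus predation (eliminate_prey), if present
    delta = policy_input.get('add_prey', 0) - policy_input.get('eliminate_prey', 0)
    return 'prey_population', previous_state['prey_population'] + delta
```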
- Fast-performing, allowing a very large number of timesteps and simulations
- Easy to prototype and to add/modify mechanisms
- Easy to insert a multitude of complex factors
- The output is usually easy to visualize
Agent-based modeling is a modeling paradigm used to simulate the interaction of autonomous agents and the resulting effects on the underlying system. An example of agent-based modeling is modeling the secondary market behavior of individual actors, such as traders, long-term investors, and liquidity providers.
Using the same predator-prey model defined above in the System Dynamics example, we'll adopt a model based on a grid world, on which prey and predators take the following actions at each timestep of their lives:
- Food is grown on every site.
- All agents digest some of the food in their stomach and get older.
- All agents move (if possible) to an available random neighboring location.
- The agents reproduce themselves if there is an available partner nearby.
- The prey agents feed on the available food.
- The predator agents hunt the nearby prey.
- All old enough agents die.
There is an inherent stochastic nature to this model: every time you run it, you'll get a completely different result for the same parameters. But we can see that there is a sort of random equilibrium that converges to the dynamical equilibrium which we presented in the system dynamics simulation.
partial_state_update_block = [
    {
        'policies': {
            'grow_food': p_grow_food
        },
        'variables': {
            'sites': s_update_food
        }
    },
    {
        'policies': {
            'increase_agent_age': p_digest_and_olden
        },
        'variables': {
            'agents': s_agent_food_age
        }
    },
    {
        'policies': {
            'move_agent': p_move_agents
        },
        'variables': {
            'agents': s_agent_location
        }
    },
    {
        'policies': {
            'reproduce_agents': p_reproduce_agents
        },
        'variables': {
            'agents': s_agent_create
        }
    },
    {
        'policies': {
            'feed_prey': p_feed_prey
        },
        'variables': {
            'agents': s_agent_food,
            'sites': s_site_food
        }
    },
    {
        'policies': {
            'hunt_prey': p_hunt_prey
        },
        'variables': {
            'agents': s_agent_food
        }
    },
    {
        'policies': {
            'natural_death': p_natural_death
        },
        'variables': {
            'agents': s_agent_remove
        }
    }
]
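As with the system dynamics example, the policies above are defined elsewhere; a minimal hedged sketch of the first pair, `p_grow_food` and `s_update_food`, assuming `sites` is a dict mapping grid coordinates to food amounts, might be:

```python
# Illustrative sketch only: 'sites' is assumed to be {(row, col): food_amount}.
FOOD_GROWTH_RATE = 1
MAX_FOOD_PER_SITE = 5

def p_grow_food(params, substep, state_history, previous_state):
    # Grow food on every site, up to a per-site maximum.
    grown = {site: min(food + FOOD_GROWTH_RATE, MAX_FOOD_PER_SITE)
             for site, food in previous_state['sites'].items()}
    return {'site_food': grown}

def s_update_food(params, substep, state_history, previous_state, policy_input):
    return 'sites', policy_input['site_food']
```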
- Are conceptually closer to experience, making it easier to explain to someone with no previous background
- Easier to generate complex behavior with simple rules
- Generates more granular and detailed information
Grassroots Economics has created a Community Currency to help alleviate the liquidity crisis of rural Kenya. BlockScience created a graph-based dynamical system model in order to provide a scaffold for Grassroots Economics' economy planning, a subset of which is discussed below as an illustration of networked model types.
For networked, graph-based models evolving over time, assuming we have a directed graph whose nodes represent agents and whose edges represent their relationships, the graph itself can be stored as a state variable and updated at each timestep:
partial_state_update_block = {
    'Behaviors': {
        'policies': {
            'action': choose_agents
        },
        'variables': {
            'network': update_agent_activity,
            'outboundAgents': update_outboundAgents,
            'inboundAgents': update_inboundAgents
        }
    },
    'Spend allocation': {
        'policies': {
            'action': spend_allocation
        },
        'variables': {
            'network': update_node_spend
        }
    }
}
In this example, during spend_allocation we iterate through the interacting agents' desired demand and allocate spending based on a stack ranking of utility, subject to demand, utility, and liquidity constraints (a sketch follows this list), such that:
- Agent $i$ does not go negative in their funds.
- All edges that agent $i$ is connected to have been stack ranked by utility and demand.
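A highly simplified sketch of such an allocation over a directed graph, assuming each outbound edge carries illustrative `utility` and `demand` attributes and each node a `funds` balance (networkx is used here only as a graph container; none of these attribute names come from the Grassroots model itself):

```python
import networkx as nx

# Sketch: allocate agent i's spending across outbound edges in descending
# utility order, never spending more than the agent's available funds.
def spend_allocation(G, i):
    funds = G.nodes[i]['funds']
    edges = sorted(G.out_edges(i, data=True),
                   key=lambda e: e[2]['utility'], reverse=True)  # stack rank by utility
    for _, j, data in edges:
        spend = min(data['demand'], funds)
        data['spend'] = spend
        funds -= spend               # agent i never goes negative
    G.nodes[i]['funds'] = funds
    return G

G = nx.DiGraph()
G.add_node('a', funds=10)
G.add_edge('a', 'b', utility=0.9, demand=7)
G.add_edge('a', 'c', utility=0.4, demand=6)
spend_allocation(G, 'a')
print(G['a']['b']['spend'], G['a']['c']['spend'], G.nodes['a']['funds'])  # 7 3 0
```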
- Represent complex relationships containing interaction data between multiple agents
- Networked models are an object type; as a result, they can be used in conjunction with ABM and multiscale modeling approaches for modeling detailed interactions efficiently.
Multiscale modeling is a type of modeling over multiple scales of time or space (spatio-temporal scales) to describe a system. An example of a multiscale model is Conviction Voting, a novel decision-making process where voters express their preference for which proposals they would like to see approved in a continuous rather than discrete way. The longer the community keeps a preference on an individual proposal, the "stronger" the proposal conviction becomes. In the Conviction Voting model a graph structure is used to record the introduction and removal of participants, candidates, proposals, and their outcomes. This complexity, represented across different scales, is what cadCAD is able to model.
- Ability to model on multiple spatio-temporal scales.
- Nonlinear dynamics and feedback effects with emergent properties
- Realistic system complexity in engineering, control systems, and economics models.
This section contains terms and definitions.
Table columns: Term/Concept, Math notation, Definition/description.

Carefully define some extra terms:
- object (as in an arbitrary data structure)
- a sequence of objects is a stream (general discrete)
- a special case is a point (as in a vector space)
- with a special case sequence called a signal (has both continuous and discrete variants)

Need to work on a breakout on "time" representations:
- continuous time
- event sequences (partial orders)
- discrete time (sampled continuous time versus strict order of events)
Term | Notation | Definition | Relations |
---|---|---|---|
State | $X$ | an object or point representing the current configuration of the system | |
Statespace | $\mathcal{X}$ | a data structure or space containing all possible values of $X$ | |
State Update Map | $h:\mathcal{X}\rightarrow\mathcal{X}$ | a map which takes the current state $X$ directly to the posterior state $X^+=h(X)$ | |
Input | $u$ | an object representing an input to the system | |
Admissible Input Space | $\mathcal{U}$ | a data structure or space containing all possible values of $u$ admissible from the current state $X$ | |
Input Space | $\mathcal{V}$ | a data structure or space containing all possible values of $u$ | |
Admissible Input Map | $U$ | a map which takes the current state $X$ and returns the admissible input space $\mathcal{U}\subseteq\mathcal{V}$ | |
Input Map | $g:\mathcal{X}\rightarrow\mathcal{U}$ | a map which takes the current state $X$ and returns an input $u\in\mathcal{U}$ | |
State Transition Map | $f:\mathcal{X}\times\mathcal{U}\rightarrow\mathcal{X}$ | a map which takes the current state $X$ and an admissible input $u$ and returns the posterior state $X^+=f(X,u)$ | |
Posterior State | $X^+$ | the state the system arrives at after applying the state transition function | |
Trajectory | $X_0, X_1, \ldots, X_T$ | a sequence of points in $\mathcal{X}$ produced by repeated application of the state update map | |
Generalized Dynamical System | | a map from the statespace $\mathcal{X}$ to itself which generates trajectories when applied repeatedly | |
Suppose a system comprising a King moving on an empty chess board. The chess board is 8x8 squares, and the King can move one square in any direction (horizontally, vertically, or diagonally).
- $X$ is a tuple representing the King's position on the board. Let's define a convention where:
  - (0,0): bottom left square
  - (0,7): upper left square
  - (7,0): bottom right square
  - (7,7): upper right square
- $\mathcal{X}$, being the set of all possible values of $X$, is the set of all 64 squares of the board: {(0,0),(0,1),...,(0,7),(1,0),(1,1),...,(7,6),(7,7)}
- $u$ is the King's move. Let's define a convention where the move is described by a tuple where each element is the number of squares the King moves along the corresponding axis, e.g.:
  - (0,1): move up
  - (-1,0): move left
  - (1,1): move diagonally to the upper right
- $\mathcal{V}$ is the set of all 9 possible values of $u$: {(-1,-1),(0,-1),(1,-1),(-1,0),(0,0),(1,0),(-1,1),(0,1),(1,1)}
- $\mathcal{U}$, the admissible input space, depends on the current position of the King ($X$). For example, if the King is at (0,0) it can't move to the left or down; its input space is restricted to {(0,0),(0,1),(1,1),(1,0)}. The admissible input map $U$ is a mapping function that formalizes such restrictions for all possible $X$.
- $g$, the input function, selects one of the possible $u$ from $\mathcal{U}$.
- $f$, the state update function, computes the new position of the King after a move. If we define $X=(x_i,x_j)$ and $u=(\Delta{x_i},\Delta{x_j})$, then $f(X,u) = (x_i+\Delta{x_i},x_j+\Delta{x_j})$. Notice that because $u$ is selected from $\mathcal{U}$, no further checks on the validity of $u$ need to be performed when $f$ is evaluated.
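This example translates directly into code; a small sketch follows, in which the random choice of move is just one possible input function $g$:

```python
import random

# Statespace: all 64 squares of the board.
X_SPACE = {(i, j) for i in range(8) for j in range(8)}
# Input space V: the 9 possible king moves (including standing still).
V_SPACE = {(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)}

def U(X):
    """Admissible input map: moves that keep the king on the board."""
    return {u for u in V_SPACE if (X[0] + u[0], X[1] + u[1]) in X_SPACE}

def g(X):
    """Input function: here, pick an admissible move at random."""
    return random.choice(sorted(U(X)))

def f(X, u):
    """State transition map: apply the move."""
    return (X[0] + u[0], X[1] + u[1])

X = (0, 0)
for t in range(5):
    X = f(X, g(X))       # one state update: X+ = f(X, g(X))
    assert X in X_SPACE  # the trajectory stays in the statespace
print(X)
```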
Term | Notation | Definition |
---|---|---|
+ invariants, other more advanced stuff, e.g.
+ fixed points and neighborhoods, design for convergence properties (e.g. estimation, sensemaking)
+ games, composed games, and path planning problems
- [1] https://hackmd.io/@OCPoXLLVQvyCK3HvlpBEXg/SkY7VvQnV?type=view
- [2] Lotka, A. J. 1925. Elements of physical biology. Baltimore: Williams & Wilkins Co.
- [3] Volterra, V. 1926. Variazioni e fluttuazioni del numero d'individui in specie animali conviventi. Mem. R. Accad. Naz. dei Lincei. Ser. VI, vol. 2.