diff --git a/.conda/meta.yaml b/.conda/meta.yaml index d4368e1ee..cdd7b5ea1 100644 --- a/.conda/meta.yaml +++ b/.conda/meta.yaml @@ -28,7 +28,7 @@ requirements: - cloudpickle - joblib - numpy >=1.16 - - pandas >=0.24,<=1.0.5 + - pandas >=0.24 - bokeh >=1.3 - scipy - fuzzywuzzy diff --git a/docs/environment.yml b/docs/environment.yml index fd36fb814..f9625ba37 100644 --- a/docs/environment.yml +++ b/docs/environment.yml @@ -5,7 +5,6 @@ dependencies: - ipython - pip: - sphinx - - parso==0.8.0 - pydata-sphinx-theme>=0.3.0 - nbsphinx - sphinxcontrib-bibtex diff --git a/docs/source/conf.py b/docs/source/conf.py index 70b5f3591..3b23a6e45 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -103,7 +103,7 @@ # built documents. # # The full version, including alpha/beta/rc tags. -release = "0.1.1" +release = "0.1.2" version = ".".join(release.split(".")[:2]) # The language for content autogenerated by Sphinx. Refer to documentation diff --git a/docs/source/getting_started/first_optimization_with_estimagic.ipynb b/docs/source/getting_started/first_optimization_with_estimagic.ipynb index 190d43f7b..71c4fa638 100644 --- a/docs/source/getting_started/first_optimization_with_estimagic.ipynb +++ b/docs/source/getting_started/first_optimization_with_estimagic.ipynb @@ -16,9 +16,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# First optimization with estimagic\n", + "# First Optimization with estimagic\n", "\n", - "This tutorial shows how to do an optimization with estimagic. It uses a very simple criterion function in order to focus on the mechanics of doing an optimization. A more interesting example can be found in the [ordered logit example](ordered_logit_example.ipynb). More details on the topics covered here can be found in the [how to guides](../how_to_guides/index.html)." + "This tutorial shows how to do an optimization with estimagic. It uses a very simple criterion function in order to focus on the mechanics of doing an optimization. A more interesting example can be found in the [ordered logit example](ordered_logit_example.ipynb). More details on the topics covered here can be found in the [how to guides](../how_to_guides/index.rst)." ] }, { @@ -29,7 +29,7 @@ "\n", "Criterion functions in estimagic take a DataFrame with parameters as first argument and return a dictionary that contains the output of the criterion function. \n", "\n", - "The output dictionary must contain the entry \"value\", which is a scalar but can also contain an arbitrary number of additional entries. Entries with special meaning are \"contributions\" and \"root_contributions\", which are used by specialized optimizers (e.g. nonlinear least squares optimizers use the \"root_contributions\"). All other entries are simply stored in a log file. If none of the optional entries is required, the criterion function can also simply return a scalar. " + "The output dictionary must contain the entry \"value\", which is a scalar but can also contain an arbitrary number of additional entries. Entries with special meaning are \"contributions\" and \"root_contributions\", which are used by specialized optimizers (e.g. nonlinear least squares optimizers use the \"root_contributions\"). All other entries are simply stored in a log file. If none of the optional entries are required, the criterion function can also simply return a scalar. " ] }, { @@ -159,7 +159,7 @@ "source": [ "## Running a simple optimization\n", "\n", - "Estimagic's `minimize` function is works similar to scipy's `minimize` function. 
A big difference is however, that estimagic does not have a default optimization algorithm. This is on purpose, because the algorithm choice should always be dependent on the problem one wants to solve. \n", + "Estimagic's `minimize` function works similarly to scipy's `minimize` function. A big difference, however, is that estimagic does not have a default optimization algorithm. This is on purpose, because the algorithm choice should always be dependent on the problem one wants to solve. \n", "\n", "Another difference is that estimagic also has a `maximize` function that works exactly as `minimize`, but does a maximization. \n", "\n", @@ -299,7 +299,7 @@ "source": [ "## Running an optimization with a least squares optimizer\n", "\n", - "Using a least squares optimizer in estimagic is exactly the same as using another optimizer. That was the whole point of allowing for a dictionary as output of the criterion function. " + "Using a least squares optimizer in estimagic is exactly the same as using another optimizer. That is the goal and result of allowing the output of the criterion function to be a dictionary. " ] }, { @@ -398,7 +398,7 @@ "source": [ "## Adding bounds\n", "\n", - "Bounds are simply added as additional columns in the start parameters. If a parameter has no bound, use np.inf for upper bounds and -np.inf for lower bounds. " + "Bounds are simply added as additional columns in the start parameters. If a parameter has no bound, use `np.inf` for upper bounds and `-np.inf` for lower bounds. " ] }, { @@ -600,7 +600,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "As you probably suspect, the estimagic constraint syntax is way more general than what we just did. For details see [how to specify constraints](../how_to_guides/optimization/how_to_specify_constraints.html)" + "As you probably suspect, the estimagic constraint syntax is much more general than what we just did. For details see [how to specify constraints](../how_to_guides/optimization/how_to_specify_constraints.rst)" ] }, { @@ -609,7 +609,7 @@ "source": [ "## Using and reading persistent logging\n", "\n", - "In fact, you have already used a persistent log the whole time. It is stored under \"logging.db\" in your working directory. If you want to store it in a different place, you can do that:" + "In fact, we have already been using a persistent log the whole time. It is stored under \"logging.db\" in our working directory. If you want to store it in a different place, you can do that:" ] }, { @@ -670,7 +670,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The persistent log file is always instantly synchronized when the optimizer tries a new parameter vector. This is very handy if an optimization has to be aborted and you want to extract the current status. It is also used by the [estimagic dashboard](../how_to_guides/optimization/how_to_use_the_dashboard.html). " + "The persistent log file is always instantly synchronized when the optimizer tries a new parameter vector. This is very handy if an optimization has to be aborted and you want to extract the current status. It is also used by the [estimagic dashboard](../how_to_guides/optimization/how_to_use_the_dashboard.rst). " ] }, { @@ -679,7 +679,7 @@ "source": [ "## Passing algorithm specific options to minimize\n", "\n", - "Most algorithms have a few optional arguments. Examples are convergence criteria or tuning parameters of the algorithm. We standardize the names of these options as much as possible, but not all algorithms support all options. 
You can find an overview of supported arguments [here](../how_to_guides/optimization/how_to_specify_algorithm_and_algo_options.html)." + "Most algorithms have a few optional arguments. Examples are convergence criteria or tuning parameters. We standardize the names of these options as much as possible, but not all algorithms support all options. You can find an overview of supported arguments [here](../how_to_guides/optimization/how_to_specify_algorithm_and_algo_options.rst)." ] }, { @@ -795,7 +795,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.8" + "version": "3.8.6" } }, "nbformat": 4, diff --git a/docs/source/getting_started/index.rst b/docs/source/getting_started/index.rst index e21e5ae45..1475d4f00 100644 --- a/docs/source/getting_started/index.rst +++ b/docs/source/getting_started/index.rst @@ -4,7 +4,7 @@ Getting Started This section is a quick introduction to the basic functionality of estimagic. -Contains not only introductions to estimagic, but also very brief introductions to +It contains not only introductions to estimagic, but also very brief introductions to numerical optimization and differentiation. .. toctree:: diff --git a/docs/source/getting_started/installation.rst b/docs/source/getting_started/installation.rst index 4867f97e5..46b01e466 100644 --- a/docs/source/getting_started/installation.rst +++ b/docs/source/getting_started/installation.rst @@ -10,6 +10,7 @@ The package can be installed via conda. To do so, type the following commands in a terminal or shell: ``$ conda config --add channels conda-forge`` + ``$ conda install -c opensourceeconomics estimagic`` The first line adds conda-forge to your conda channels. This is necessary for @@ -20,9 +21,9 @@ and its mandatory dependencies. Installing optional dependencies ================================ -Only the ``scipy`` optimizers are a mandatory dependency of estimagic. Other algorithms +Only ``scipy`` is a mandatory dependency of estimagic. Other algorithms become available if you install more packages. We make this optional because most of the -time you will use at least one additional package, but only very rarely you will need all +time you will use at least one additional package, but only very rarely will you need all of them. @@ -32,14 +33,14 @@ see :ref:`list_of_algorithms`. To enable all algorithms at once, do the following: -``conda install nlopt`` +.. ``conda install nlopt`` ``pip install Py-BOBYQA`` -``pip install DFOLS`` +``pip install DFO-LS`` -``conda install petsc4py`` (Not available on windows) +``conda install petsc4py`` (Not available on Windows) -``conda install cyipopt`` (Not available on windows) +.. ``conda install cyipopt`` (Not available on Windows) -``conda install pygmo`` +.. ``conda install pygmo`` diff --git a/docs/source/getting_started/ordered_logit_example.ipynb b/docs/source/getting_started/ordered_logit_example.ipynb index 95e9424fe..de8b59b33 100644 --- a/docs/source/getting_started/ordered_logit_example.ipynb +++ b/docs/source/getting_started/ordered_logit_example.ipynb @@ -545,7 +545,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "This looks pretty good! The parameter estimates line up perfectly. I actually had to try three optimizers to get at least one differenet digit which makes the result more credible. Other optimizers like `nlopt_bobyqa` and `nlopt_neledermead` hit it on all digits!\n", + "This looks pretty good! The parameter estimates line up perfectly. 
I actually had to try three optimizers to get at least one different digit, which makes the result more credible. Other optimizers hit it on all digits.\n", "
\n", "Note that standard error calculation, especially in combination with constraints is still considered experimental in estimagic.\n", diff --git a/docs/source/getting_started/visualizing_an_optimization_problem.ipynb index 01ce92e9d..75e78eeb9 100644 --- a/docs/source/getting_started/visualizing_an_optimization_problem.ipynb +++ b/docs/source/getting_started/visualizing_an_optimization_problem.ipynb @@ -17,7 +17,7 @@ "source": [ "# Visualizing an Optimization Problem\n", "\n", - "In order to choose the right optimization algorithm, it is important to know as much as possible about an optimization problems. If the criterion function has only one or two parameters, plotting it over the sample space can be helpful. However, such low dimensional problems are rare. \n", + "In order to choose the right optimization algorithm, it is important to know as much as possible about the problem one wants to solve. If the criterion function has only one or two parameters, plotting it over the sample space can be helpful. However, such low dimensional problems are rare. \n", "\n", "In this notebook we show how higher dimensional functions can be visualized using estimagic and which properties of the criterion function can be learned from them. \n", "\n", @@ -87,10 +87,10 @@ "source": [ "The plot gives us the following insights:\n", "\n", - "- No matter at which value the other parameters are, there seems to be a minimum at 0 for each parameter. Thus coordinate descent is probably a good strategy. \n", - "- There is no sign of local optima \n", - "- There is no sign of noise or non-differentiablities (careful, grid might not be fine enough)\n", - "- The problem seems to be convex\n", + "- No matter at which value the other parameters are, there seems to be a minimum at 0 for each parameter. Thus, coordinate descent is probably a good strategy. \n", + "- There is no sign of local optima. \n", + "- There is no sign of noise or non-differentiabilities (careful, grid might not be fine enough).\n", + "- The problem seems to be convex.\n", "\n", "-> We would expect almost any derivative based optimizer to work well here (which we know to be correct in that case)" ] }, @@ -158,13 +158,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Here the picture looks very differently\n", "\n", "- The minimum along each coordinate seems to strongly depend on the value of all other parameters. This means that fixing some parameters initially and only optimizing the rest would not work well. \n", - "There is no sign of noise or non-differentiablities\n", - "There are several local optima\n", "\n", - "-> We would expect that a gradient based optimizer is efficient at finding local optima. Thus a good strategy would be to run gradient based optimizers from many starting values." + "Here the picture looks very different:\n", "\n", "- The minimum along each coordinate seems to strongly depend on the value of all other parameters. This means that fixing some parameters initially and only optimizing the rest would not work well. \n", + "- There is no sign of noise or non-differentiabilities.\n", + "- There are several local optima.\n", "\n", + "-> We would expect that a gradient based optimizer is efficient at finding local optima. Thus, a good strategy would be to run gradient based optimizers from many starting values." 
] } ], diff --git a/docs/source/getting_started/which_optimizer_to_use.ipynb b/docs/source/getting_started/which_optimizer_to_use.ipynb index 26d3f1522..fb3d217f5 100644 --- a/docs/source/getting_started/which_optimizer_to_use.ipynb +++ b/docs/source/getting_started/which_optimizer_to_use.ipynb @@ -17,13 +17,15 @@ "source": [ "# Which optimizer to use\n", "\n", - "This is the very very very short guide on selecting a suitable optimization algorithm based on a minimum of information. We are working on a longer version that contains more background information and can be found [here](../how_to_guides/optimization/how_to_choose_optimizer.html). \n", + "This is the very very very short guide on selecting a suitable optimization algorithm based on a minimum of information. We are working on a longer version that contains more background information and can be found [here](../how_to_guides/optimization/how_to_choose_optimizer.rst). \n", "\n", "However, we will also keep this short guide for very impatient people who feel lucky enough. \n", "\n", - "To select an optimizer, you need to answer three questions:\n", - "1. Is your criterion function differentiable\n", - "2. Do you have a nonlinear least squares structure (i.e. do you sum some kind of squared residuals at the end of your criterion function). " + "To select an optimizer, you need to answer two questions:\n", + "\n", + "1. Is your criterion function differentiable?\n", + "\n", + "2. Do you have a nonlinear least squares structure (i.e. do you sum some kind of squared residuals at the end of your criterion function)?" ] }, { @@ -386,7 +388,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.8" + "version": "3.8.6" } }, "nbformat": 4, diff --git a/docs/source/getting_started/why_optimization_is_hard.ipynb b/docs/source/getting_started/why_optimization_is_hard.ipynb index 1d6b47894..f1e8e0846 100644 --- a/docs/source/getting_started/why_optimization_is_hard.ipynb +++ b/docs/source/getting_started/why_optimization_is_hard.ipynb @@ -20,19 +20,19 @@ "\n", "This tutorial shows why optimization is difficult and you need some knowledge in order to solve optimization problems efficiently. It is meant for people who have no previous experience with numerical optimization and wonder why there are so many optimization algorithms and still none that works for all problems. For each potential problem we highlight, we also give some ideas on how to solve it. 
\n", "\n", - "If you simply want to know how to choose an optimization algorithm, check out [our very brief guide](which_optimizer_to_use.ipynb) or the [longer version](../how_to_guides/optimization/how_to_choose_optimizer.html).\n", + "If you simply want to know how to choose an optimization algorithm, check out [our very brief guide](which_optimizer_to_use.ipynb) or the [longer version](../how_to_guides/optimization/how_to_choose_optimizer.rst).\n", "\n", "\n", - "If you simply want to learn the mechanics of doing optimization with estimagic, check out the [first optimization with estimagic tutorial](first_optimization_with_estimagic)\n", + "If you simply want to learn the mechanics of doing optimization with estimagic, check out the [first optimization with estimagic tutorial](first_optimization_with_estimagic.ipynb).\n", "\n", "\n", "The message of this notebook can be summarized as follows:\n", "\n", - "- The only algorithms that are guaranteed to solve all problems are grid search or other algorithms that evaluate the criterion function almost everywhere in the parameter space\n", - "- If you have more than a hand full of parameters these methods would take too long\n", - "- Thus you have to know the properties of your optimization problem and choose and have knowledge on different optimization algorithms in order to choose the right algorithm for your problem. \n", + "- The only algorithms that are guaranteed to solve all problems are grid search or other algorithms that evaluate the criterion function almost everywhere in the parameter space.\n", + "- If you have more than a handful of parameters, these methods would take too long.\n", + "- Thus, you have to know the properties of your optimization problem and have knowledge of different optimization algorithms in order to choose the right algorithm for your problem. \n", "\n", - "This tutorial uses variants of the sphere function from the [first optimization with estimagic tutorial](first_optimization_with_estimagic) to illustrate problems. " + "This tutorial uses variants of the sphere function from the [first optimization with estimagic tutorial](first_optimization_with_estimagic.ipynb) to illustrate problems. " ] }, { @@ -55,7 +55,7 @@ "source": [ "## Why grid search is infeasible\n", "\n", - "Sampling based optimizers and grid search, require the parameter space to be bounded in all directions. Let's assume that we know the optimum of the sphere function is between -0.5 and 0.5, but don't know where it is. \n", + "Sampling based optimizers and grid search require the parameter space to be bounded in all directions. Let's assume that we know the optimum of the sphere function is between -0.5 and 0.5, but don't know where it is. \n", "\n", "In order to get a precision of 2 digits with a grid search, we require the following number of function evaluations (depending on the number of parameters)." ] }, @@ -105,7 +105,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Sampling based algorithms typically fix the number of criterion evaluations and try to spend them a bit smarter than searching completely randomly. However, the smart tricks only work under additional assumptions. Thus either, you need to make assumptions on your problem or you will get the curse of dimensionality through the backdoor again. 
For easier anaylis, assume we fix the number of function evaluations in a grid search instead of a sampling based algorithm and want to know which precision we can get, depending on the dimension:\n", + "Sampling based algorithms typically fix the number of criterion evaluations and try to spend them a bit smarter than searching completely randomly. However, the smart tricks only work under additional assumptions. Thus, either you need to make assumptions on your problem or you will get the curse of dimensionality through the backdoor again. For easier analysis, assume we fix the number of function evaluations in a grid search instead of a sampling based algorithm and want to know which precision we can get, depending on the dimension:\n", "\n", "For 1 million function evaluations, we can expect the following precision:" ] @@ -150,7 +150,7 @@ "source": [ "## How derivatives can solve the curse of dimensionality\n", "\n", - "Derivative based methods do not try to evaluate the criterion function everywhere in the search space. Instead they start at some point and go \"downhill\" from there. The gradient of the criterion function indicates which direction is down hill. Then there are different ways of determining how far to go in that direction. The time it takes to evaluate a derivative increases at most linearly in the number of parameters. Using the derivative information, optimizers can often find an optimum with very few function evaluations." + "Derivative based methods do not try to evaluate the criterion function everywhere in the search space. Instead they start at some point and go \"downhill\" from there. The gradient of the criterion function indicates which direction is downhill. Then there are different ways of determining how far to go in that direction. The time it takes to evaluate a derivative increases at most linearly in the number of parameters. Using the derivative information, optimizers can often find an optimum with very few function evaluations." 
] }, { @@ -453,7 +453,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.8" + "version": "3.8.6" } }, "nbformat": 4, diff --git a/environment.yml b/environment.yml index b74f24255..56bb6053d 100644 --- a/environment.yml +++ b/environment.yml @@ -7,7 +7,7 @@ channels: dependencies: - - python=3.7 + - python=3.8 - pip - anaconda-client - bokeh>=1.3 @@ -20,8 +20,8 @@ dependencies: - cloudpickle - jupyterlab - nbsphinx - - numpy=1.16 - - pandas<=1.0.5 + - numpy + - pandas - pdbpp - petsc4py>=3.11 - pytest diff --git a/estimagic/__init__.py b/estimagic/__init__.py index 4ef2fb9c4..def027c86 100644 --- a/estimagic/__init__.py +++ b/estimagic/__init__.py @@ -1,4 +1,4 @@ -__version__ = "0.1.1" +__version__ = "0.1.2" from estimagic.optimization.optimize import minimize # noqa: F401 diff --git a/estimagic/logging/database_utilities.py b/estimagic/logging/database_utilities.py index cbf2a8ee5..909078c5a 100644 --- a/estimagic/logging/database_utilities.py +++ b/estimagic/logging/database_utilities.py @@ -143,23 +143,18 @@ def make_optimization_status_table(database, if_exists="extend"): database.create_all(database.bind) -def make_optimization_problem_table(database, if_exists="extend"): +def make_optimization_problem_table( + database, if_exists="extend", save_all_arguments=True +): table_name = "optimization_problem" _handle_existing_table(database, table_name, if_exists) columns = [ Column("rowid", Integer, primary_key=True), Column("direction", String), - Column("criterion", PickleType(pickler=RobustPickler)), - Column("criterion_kwargs", PickleType(pickler=RobustPickler)), Column("params", PickleType(pickler=RobustPickler)), Column("algorithm", PickleType(pickler=RobustPickler)), - Column("constraints", PickleType(pickler=RobustPickler)), Column("algo_options", PickleType(pickler=RobustPickler)), - Column("derivative", PickleType(pickler=RobustPickler)), - Column("derivative_kwargs", PickleType(pickler=RobustPickler)), - Column("criterion_and_derivative", PickleType(pickler=RobustPickler)), - Column("criterion_and_derivative_kwargs", PickleType(pickler=RobustPickler)), Column("numdiff_options", PickleType(pickler=RobustPickler)), Column("logging", PickleType(pickler=RobustPickler)), Column("log_options", PickleType(pickler=RobustPickler)), @@ -168,6 +163,19 @@ def make_optimization_problem_table(database, if_exists="extend"): Column("cache_size", Integer), ] + if save_all_arguments: + columns += [ + Column("criterion", PickleType(pickler=RobustPickler)), + Column("criterion_kwargs", PickleType(pickler=RobustPickler)), + Column("constraints", PickleType(pickler=RobustPickler)), + Column("derivative", PickleType(pickler=RobustPickler)), + Column("derivative_kwargs", PickleType(pickler=RobustPickler)), + Column("criterion_and_derivative", PickleType(pickler=RobustPickler)), + Column( + "criterion_and_derivative_kwargs", PickleType(pickler=RobustPickler) + ), + ] + Table( table_name, database, *columns, extend_existing=True, sqlite_autoincrement=True ) diff --git a/estimagic/optimization/optimize.py b/estimagic/optimization/optimize.py index e2c2360f4..cd0250d4f 100644 --- a/estimagic/optimization/optimize.py +++ b/estimagic/optimization/optimize.py @@ -116,6 +116,9 @@ def maximize( of the criterion function (and gradient if applicable) takes more than 100 ms, the logging overhead is negligible. - "if_exists": (str) One of "extend", "replace", "raise" + - "save_all_arguments": (bool). 
If True, all arguments to maximize + that can be pickled are saved in the log file. Otherwise, only the + information needed by the dashboard is saved. Default False. error_handling (str): Either "raise" or "continue". Note that "continue" does not absolutely guarantee that no error is raised but we try to handle as many errors as possible in that case without aborting the optimization. @@ -245,6 +248,9 @@ def minimize( of the criterion function (and gradient if applicable) takes more than 100 ms, the logging overhead is negligible. - "if_exists": (str) One of "extend", "replace", "raise" + - "save_all_arguments": (bool). If True, all arguments to minimize + that can be pickled are saved in the log file. Otherwise, only the + information needed by the dashboard is saved. Default False. error_handling (str): Either "raise" or "continue". Note that "continue" does not absolutely guarantee that no error is raised but we try to handle as many errors as possible in that case without aborting the optimization. @@ -376,6 +382,9 @@ def optimize( of the criterion function (and gradient if applicable) takes more than 100 ms, the logging overhead is negligible. - "if_exists": (str) One of "extend", "replace", "raise" + - "save_all_arguments": (bool). If True, all arguments to + optimize that can be pickled are saved in the log file. Otherwise, only + the information needed by the dashboard is saved. Default False. error_handling (str): Either "raise" or "continue". Note that "continue" does not absolutely guarantee that no error is raised but we try to handle as many errors as possible in that case without aborting the optimization. @@ -690,6 +699,7 @@ def _create_and_initialize_database(logging, log_options, first_eval, problem_da path = logging fast_logging = log_options.get("fast_logging", False) if_exists = log_options.get("if_exists", "extend") + save_all_arguments = log_options.get("save_all_arguments", False) database = load_database(path=path, fast_logging=fast_logging) # create the optimization_iterations table @@ -706,8 +716,20 @@ def _create_and_initialize_database(logging, log_options, first_eval, problem_da ) # create_and_initialize the optimization_problem table - make_optimization_problem_table(database, if_exists) - + make_optimization_problem_table(database, if_exists, save_all_arguments) + if not save_all_arguments: + not_saved = [ + "criterion", + "criterion_kwargs", + "constraints", + "derivative", + "derivative_kwargs", + "criterion_and_derivative", + "criterion_and_derivative_kwargs", + ] + problem_data = { + key: val for key, val in problem_data.items() if key not in not_saved + } append_row(problem_data, "optimization_problem", database, path, fast_logging) return database diff --git a/estimagic/tests/inference/test_moment_covs.py b/estimagic/tests/inference/test_moment_covs.py index 6682a6675..783d7f0fd 100644 --- a/estimagic/tests/inference/test_moment_covs.py +++ b/estimagic/tests/inference/test_moment_covs.py @@ -23,7 +23,7 @@ def test_covariance_moments_random(): def test_covariance_moments_unit(): moment_cond = np.reshape(np.arange(12), (3, 4)) - control = np.full((4, 4), 32, dtype=np.float) / 3 + control = np.full((4, 4), 32, dtype=float) / 3 assert_array_almost_equal(_covariance_moments(moment_cond), control) diff --git a/estimagic/tests/optimization/test_all_algorithms_with_sum_of_squares.py b/estimagic/tests/optimization/test_all_algorithms_with_sum_of_squares.py index 967bf6bca..01b3f6d5f 100644 --- 
a/estimagic/tests/optimization/test_all_algorithms_with_sum_of_squares.py +++ b/estimagic/tests/optimization/test_all_algorithms_with_sum_of_squares.py @@ -208,6 +208,7 @@ def test_without_constraints(algo, direction, crit, deriv, crit_and_deriv): algorithm=algo, derivative=deriv, criterion_and_derivative=crit_and_deriv, + log_options={"save_all_arguments": False}, ) assert res["success"], f"{algo} did not converge." diff --git a/estimagic/tests/optimization/test_reparametrize.py b/estimagic/tests/optimization/test_reparametrize.py index 7c28b9caf..5e5aad96e 100644 --- a/estimagic/tests/optimization/test_reparametrize.py +++ b/estimagic/tests/optimization/test_reparametrize.py @@ -41,7 +41,7 @@ def reduce_params(params, constraints): all_locs = [] for constr in constraints: if "query" in constr: - all_locs = ["i", "j"] + all_locs = ["i", "j1", "j2"] elif isinstance(constr["loc"], tuple): all_locs.append(constr["loc"][0]) elif isinstance(constr["loc"], list): diff --git a/setup.cfg b/setup.cfg index 4a709e518..306a8ee54 100644 --- a/setup.cfg +++ b/setup.cfg @@ -1,5 +1,5 @@ [bumpversion] -current_version = 0.1.1 +current_version = 0.1.2 parse = (?P<major>\d+)\.(?P<minor>\d+)(\.(?P<patch>\d+))(\-?((dev)?(?P<dev>\d+))?) serialize = {major}.{minor}.{patch}dev{dev} diff --git a/setup.py b/setup.py index 0dbc6f911..98fc1b8d3 100644 --- a/setup.py +++ b/setup.py @@ -3,7 +3,7 @@ setup( name="estimagic", - version="0.1.1", + version="0.1.2", description="Tools for the estimation of (structural) econometric models.", long_description=""" Estimagic is a Python package that helps to build high-quality and user diff --git a/tox.ini b/tox.ini index cb5ed32d3..e346eedb5 100644 --- a/tox.ini +++ b/tox.ini @@ -22,7 +22,7 @@ conda_deps = cloudpickle numpy petsc4py >= 3.11 - pandas <= 1.0.5 + pandas pytest pytest-cov pytest-mock @@ -56,7 +56,7 @@ conda_deps = commands = # Add W flag to builds so that warnings become errors. sphinx-build -nT -b html -d {envtmpdir}/doctrees . {envtmpdir}/html - # sphinx-build -nT -b linkcheck -d {envtmpdir}/doctrees . {envtmpdir}/linkcheck + sphinx-build -nT -b linkcheck -d {envtmpdir}/doctrees . {envtmpdir}/linkcheck [doc8] ignore =
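
Usage note (illustrative, not part of the patch): the new "save_all_arguments" entry of log_options decides whether pickled copies of criterion, criterion_kwargs, constraints, derivative, criterion_and_derivative and their kwargs are written to the optimization_problem table, or whether only the information needed by the dashboard is stored (the default). Below is a minimal sketch of a call that sets the option explicitly; the criterion function, the params layout with a "value" column, and the algorithm name "scipy_lbfgsb" are assumptions made for this example only.

# Sketch only: the params layout and the algorithm name are assumed for
# illustration and are not taken from this patch.
import numpy as np
import pandas as pd

from estimagic import minimize


def sphere(params):
    # A criterion function receives the params DataFrame and returns a dict
    # that contains at least the entry "value" (or simply a scalar).
    return {"value": (params["value"] ** 2).sum()}


start_params = pd.DataFrame({"value": np.array([1.0, 2.0, 3.0])})

res = minimize(
    criterion=sphere,
    params=start_params,
    algorithm="scipy_lbfgsb",  # assumed; any installed algorithm works
    logging="logging.db",
    # False (the default) keeps only what the dashboard needs in the log;
    # True additionally pickles criterion, constraints, derivative, etc.
    log_options={"save_all_arguments": False},
)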