
multi-level - How to use nested problems in OpenMDAO 1.x?


I am trying to implement Collaborative Optimization and other multi-level architectures in OpenMDAO. I read here that this can be done by defining a separate solve_nonlinear method in a subclass of Problem.

The issue is that the solve_nonlinear I defined is never called when the problem instance is run. Here is the code:

from __future__ import print_function, division
import numpy as np
import time

from openmdao.api import Component, Group, IndepVarComp, ExecComp, \
    Problem, ScipyOptimizer, NLGaussSeidel, ScipyGMRES


class SellarDis1(Component):
    """Component containing Discipline 1."""

    def __init__(self):
        super(SellarDis1, self).__init__()

        self.add_param('z', val=np.zeros(2))
        self.add_param('x', val=0.0)
        self.add_param('y2', val=1.0)

        self.add_output('y1', val=1.0)

    def solve_nonlinear(self, params, unknowns, resids):
        """Evaluates the equation
        y1 = z1**2 + z2 + x1 - 0.2*y2"""

        z1 = params['z'][0]
        z2 = params['z'][1]
        x1 = params['x']
        y2 = params['y2']

        unknowns['y1'] = z1**2 + z2 + x1 - 0.2*y2

    def linearize(self, params, unknowns, resids):
        J = {}

        J['y1', 'y2'] = -0.2
        J['y1', 'z'] = np.array([[2*params['z'][0], 1.0]])
        J['y1', 'x'] = 1.0

        return J

class SellarDis2(Component):
    """Component containing Discipline 2."""

    def __init__(self):
        super(SellarDis2, self).__init__()

        self.add_param('z', val=np.zeros(2))
        self.add_param('y1', val=1.0)

        self.add_output('y2', val=1.0)

    def solve_nonlinear(self, params, unknowns, resids):

        z1 = params['z'][0]
        z2 = params['z'][1]
        y1 = params['y1']
        # Guard against a negative y1 under the square root
        y1 = abs(y1)

        unknowns['y2'] = y1**.5 + z1 + z2

    def linearize(self, params, unknowns, resids):
        J = {}

        J['y2', 'y1'] = 0.5*params['y1']**-0.5
        J['y2', 'z'] = np.array([[1.0, 1.0]])

        return J

class Sellar(Group):

    def __init__(self):
        super(Sellar, self).__init__()

        self.add('px', IndepVarComp('x', 1.0), promotes=['*'])
        self.add('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['*'])

        self.add('d1', SellarDis1(), promotes=['*'])
        self.add('d2', SellarDis2(), promotes=['*'])

        self.add('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                     z=np.array([0.0, 0.0]), x=0.0, y1=0.0, y2=0.0),
                 promotes=['*'])

        self.add('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['*'])
        self.add('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['*'])

        self.nl_solver = NLGaussSeidel()
        self.nl_solver.options['atol'] = 1.0e-12

        self.ln_solver = ScipyGMRES()

    def solve_nonlinear(self, params=None, unknowns=None, resids=None, metadata=None):

        print("Group's solve_nonlinear was called!!")
        # Discipline optimizer would be called here?
        super(Sellar, self).solve_nonlinear(params, unknowns, resids)


class ModifiedProblem(Problem):

    def solve_nonlinear(self, params, unknowns, resids):

        print("Problem's solve_nonlinear was called!!")
        # or here?
        super(ModifiedProblem, self).solve_nonlinear()


top = ModifiedProblem()
top.root = Sellar()

top.driver = ScipyOptimizer()
top.driver.options['optimizer'] = 'SLSQP'

top.driver.add_desvar('z', lower=np.array([-10.0, 0.0]),
                      upper=np.array([10.0, 10.0]))
top.driver.add_desvar('x', lower=0., upper=10.0)
top.driver.add_objective('obj')
top.driver.add_constraint('con1', upper=0.0)
top.driver.add_constraint('con2', upper=0.0)


top.setup(check=False)
top.run()

The output of the above code is:

Group's solve_nonlinear was called!!
Group's solve_nonlinear was called!!
Group's solve_nonlinear was called!!
Group's solve_nonlinear was called!!
Group's solve_nonlinear was called!!
Group's solve_nonlinear was called!!
Group's solve_nonlinear was called!!
Optimization terminated successfully. (Exit mode 0)
Current function value: [ 3.18339395]
Iterations: 6
Function evaluations: 6
Gradient evaluations: 6
Optimization Complete
-----------------------------------

This means the solve_nonlinear defined in the Problem subclass is never called. Should I instead call the discipline optimizers from the Group subclass?

Also, how do I pass the target variables between the two optimization problems (system-level and discipline-level), in particular how do I return the globally shared variables optimized by each discipline back to the system-level optimizer?

Thank you all.

Best Answer

You are correct that solve_nonlinear on Problem never gets called, because Problem is not an OpenMDAO component and does not have a solve_nonlinear method. To run a sub-model problem inside another problem, what you want to do is encapsulate it in a Component instance. It would look something like this:

class SubOptimization(Component):

    def __init__(self):
        super(SubOptimization, self).__init__()

        # Inputs to this subprob
        self.add_param('z', val=np.zeros(2))
        self.add_param('x', val=0.0)
        self.add_param('y2', val=1.0)

        # Unknowns for this sub prob
        self.add_output('y1', val=1.0)

        self.problem = prob = Problem()
        prob.root = Group()
        prob.root.add('px', IndepVarComp('x', 1.0), promotes=['*'])
        prob.root.add('d1', SellarDis1(), promotes=['*'])

        # TODO - add cons/objs for sub prob

        prob.driver = ScipyOptimizer()
        prob.driver.options['optimizer'] = 'SLSQP'

        prob.driver.add_desvar('x', lower=0., upper=10.0)
        prob.driver.add_objective('obj')
        prob.driver.add_constraint('con1', upper=0.0)
        prob.driver.add_constraint('con2', upper=0.0)

        prob.setup()

        # Must finite difference across optimizer
        self.fd_options['force_fd'] = True

    def solve_nonlinear(self, params, unknowns, resids):

        prob = self.problem

        # Pass values into our problem
        prob['x'] = params['x']
        prob['z'] = params['z']
        prob['y2'] = params['y2']

        # Run problem
        prob.run()

        # Pull values from problem
        unknowns['y1'] = prob['y1']

You can place this component in your main problem (along with a component for discipline 2, though discipline 2 doesn't really need a sub-optimization since it has no local design variables) and optimize the global design variables around it.
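A minimal sketch of that top-level wiring might look like the following. It is just as untested as the snippet above, it assumes the SubOptimization component has been completed (the TODO filled in), and it simply folds the local x**2 term of the objective into the sub-problem, so the exact split of objectives and constraints between the two levels is still open:

# Top-level problem: optimize the global variables z around the sub-optimization.
top = Problem()
top.root = Group()

top.root.add('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['*'])

# Discipline 1 is wrapped in its own sub-optimization; discipline 2 needs none.
top.root.add('sub_opt1', SubOptimization(), promotes=['*'])
top.root.add('d2', SellarDis2(), promotes=['*'])

# NOTE: the x**2 term of the original objective is assumed to be handled
# locally inside the sub-problem in this sketch.
top.root.add('obj_cmp', ExecComp('obj = z[1] + y1 + exp(-y2)',
                                 z=np.array([0.0, 0.0]), y1=0.0, y2=0.0),
             promotes=['*'])
top.root.add('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['*'])
top.root.add('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['*'])

# Converge the y1 <-> y2 coupling between the sub-optimization and discipline 2.
top.root.nl_solver = NLGaussSeidel()
top.root.ln_solver = ScipyGMRES()

top.driver = ScipyOptimizer()
top.driver.options['optimizer'] = 'SLSQP'
top.driver.add_desvar('z', lower=np.array([-10.0, 0.0]),
                      upper=np.array([10.0, 10.0]))
top.driver.add_objective('obj')
top.driver.add_constraint('con1', upper=0.0)
top.driver.add_constraint('con2', upper=0.0)

top.setup(check=False)
top.run()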

One caveat: this isn't something I have tried (and I haven't tested the incomplete snippet above either), but it should put you on the right track. You may hit a bug, since this hasn't really been exercised much. When I get some time, I will put together a CO test like this for the OpenMDAO test suite so that we have it covered.

Regarding "multi-level - How to use nested problems in OpenMDAO 1.x?", the original question can be found on Stack Overflow: https://stackoverflow.com/questions/35287786/
