
Matlab to Julia optimization: Function in JuMP @SetNLObjective

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 19:35:19

I am trying to rewrite a Matlab fmincon optimization routine in Julia.

Here is the Matlab code:

function [x,fval] = example3()

    x0 = [0; 0; 0; 0; 0; 0; 0; 0];
    A = [];
    b = [];
    Ae = [1000 1000 1000 1000 -1000 -1000 -1000 -1000];
    be = [100];
    lb = [0; 0; 0; 0; 0; 0; 0; 0];
    ub = [1; 1; 1; 1; 1; 1; 1; 1];
    noncon = [];

    options = optimset('fmincon');
    options.Algorithm = 'interior-point';

    [x,fval] = fmincon(@objfcn,x0,A,b,Ae,be,lb,ub,@noncon,options);

end

function f = objfcn(x)

    % user inputs
    Cr = [ 0.0064  0.00408 0.00192 0;
           0.00408 0.0289  0.0204  0.0119;
           0.00192 0.0204  0.0576  0.0336;
           0       0.0119  0.0336  0.1225 ];

    w0 = [ 0.3; 0.3; 0.2; 0.1 ];
    Er = [0.05; 0.1; 0.12; 0.18];

    % calculate objective function
    w = w0+x(1:4)-x(5:8);
    Er_p = w'*Er;
    Sr_p = sqrt(w'*Cr*w);

    % f = objective function
    f = -Er_p/Sr_p;

end

Here is my Julia code:

using JuMP
using Ipopt

m = Model(solver=IpoptSolver())

# INPUT DATA
w0 = [ 0.3; 0.3; 0.2; 0.1 ]
Er = [0.05; 0.1; 0.12; 0.18]
Cr = [ 0.0064  0.00408 0.00192 0;
       0.00408 0.0289  0.0204  0.0119;
       0.00192 0.0204  0.0576  0.0336;
       0       0.0119  0.0336  0.1225 ]

# VARIABLES
@defVar(m, 0 <= x[i=1:8] <= 1, start = 0.0)
@defNLExpr(w, w0+x[1:4]-x[5:8])
@defNLExpr(Er_p, w'*Er)
@defNLExpr(Sr_p, w'*Cr*w)
@defNLExpr(f, Er_p/Sr_p)

# OBJECTIVE
@setNLObjective(m, Min, f)

# CONSTRAINTS
@addConstraint(m, 1000*x[1] + 1000*x[2] + 1000*x[3] + 1000*x[4] -
1000*x[5] - 1000*x[6] - 1000*x[7] - 1000*x[8] == 100)

# SOLVE
status = solve(m)

# DISPLAY RESULTS
println("x = ", round(getValue(x),4))
println("f = ", round(getObjectiveValue(m),4))

The Julia optimization works when I write the objective function out explicitly inside @setNLObjective, but that is not suitable here: the user's inputs may change, producing a different objective function, as you can see from the way the objective is constructed.

The problem appears to be a restriction in JuMP on how the objective function can be entered in the @setNLObjective argument:

All expressions must be simple scalar operations. You cannot use dot, matrix-vector products, vector slices, etc. Translate vector operations into explicit sum{} operations.
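Concretely, that restriction means every vector or matrix operation in the objective has to be expanded into explicit indexed sums. Here is a sketch of the scalarized form as a plain Julia function (the name `objfcn_scalar` is mine, for illustration only); inside @setNLObjective, the JuMP version used here would write each of these generators with its `sum{expr, i=1:4}` syntax instead:

```julia
# Scalarized objective: every dot product and quadratic form expanded
# into explicit indexed sums, as JuMP's nonlinear interface requires.
# (objfcn_scalar is a hypothetical helper, not from the original post.)
function objfcn_scalar(x, w0, Er, Cr)
    # w[i] = w0[i] + x[i] - x[i+4], written inline in each sum below
    num = sum((w0[i] + x[i] - x[i+4]) * Er[i] for i in 1:4)
    den = sqrt(sum((w0[i] + x[i] - x[i+4]) * Cr[i, j] * (w0[j] + x[j] - x[j+4])
                   for i in 1:4, j in 1:4))
    return -num / den
end
```

Written this way, the expression depends on the data only through `w0`, `Er`, and `Cr`, so changing the user's input values needs no rewrite; only a change in problem dimension would.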

Is there a workaround? Or is there any other package in Julia that can solve this, bearing in mind that I will not have the Jacobian or Hessian?

Thank you very much.

Best Answer

Here is a working example of the Matlab code using Julia and the NLopt optimization package.

using NLopt

function objective_function(x::Vector{Float64}, grad::Vector{Float64})

    w0 = [ 0.3; 0.3; 0.2; 0.1 ]
    Er = [0.05; 0.1; 0.12; 0.18]
    Cr = [ 0.0064  0.00408 0.00192 0;
           0.00408 0.0289  0.0204  0.0119;
           0.00192 0.0204  0.0576  0.0336;
           0       0.0119  0.0336  0.1225 ]

    w = w0 + x[1:4] - x[5:8]

    Er_p = w' * Er

    Sr_p = sqrt(w' * Cr * w)

    f = -Er_p/Sr_p

    # w' * Er returned a 1-element array in the Julia version of the day,
    # so index into f to extract the scalar
    obj_func_value = f[1]

    return(obj_func_value)
end
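As a quick sanity check on the translation (my addition, not part of the original answer): at the fmincon start point x0 = zeros(8) the weights are unchanged, so the objective reduces to the base portfolio w0 and can be verified by hand:

```julia
w0 = [0.3, 0.3, 0.2, 0.1]
Er = [0.05, 0.1, 0.12, 0.18]
Cr = [0.0064  0.00408 0.00192 0;
      0.00408 0.0289  0.0204  0.0119;
      0.00192 0.0204  0.0576  0.0336;
      0       0.0119  0.0336  0.1225]

# At x = zeros(8) we have w = w0, so:
Er_p = w0' * Er            # expected portfolio return, 0.087
Sr_p = sqrt(w0' * Cr * w0) # portfolio risk
f0 = -Er_p / Sr_p          # objective value at the start point (negative)
```

Both the Matlab objfcn and the Julia objective_function should return this same value when evaluated at the zero start point.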

function constraint_function(x::Vector, grad::Vector)

    constraintValue = 1000*x[1] + 1000*x[2] + 1000*x[3] + 1000*x[4] -
                      1000*x[5] - 1000*x[6] - 1000*x[7] - 1000*x[8] - 100

    return constraintValue
end
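For intuition (my own note, not from the original answer): NLopt treats an equality constraint as satisfied when the supplied function returns zero, so the expression above encodes sum(x[1:4]) - sum(x[5:8]) == 0.1. A minimal check at a hand-picked feasible point (the helper name `constraint_value` is mine):

```julia
# Same constraint as constraint_function above, written compactly.
constraint_value(x) = 1000 * sum(x[1:4]) - 1000 * sum(x[5:8]) - 100

# Any x with sum(x[1:4]) - sum(x[5:8]) == 0.1 is feasible, e.g.:
x_feasible = [0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
constraint_value(x_feasible)  # ≈ 0.0
```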

opt1 = Opt(:LN_COBYLA, 8)

lower_bounds!(opt1, [0, 0, 0, 0, 0, 0, 0, 0])
upper_bounds!(opt1, [1, 1, 1, 1, 1, 1, 1, 1])

#ftol_rel!(opt1, 0.5)
#ftol_abs!(opt1, 0.5)

min_objective!(opt1, objective_function)
equality_constraint!(opt1, constraint_function)

(fObjOpt, xOpt, flag) = optimize(opt1, [0, 0, 0, 0, 0, 0, 0, 0])

Regarding "Matlab to Julia optimization: Function in JuMP @SetNLObjective", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34898184/
