
python - Implementation of the Metropolis-Hastings algorithm for a Gaussian integral

Reposted · Author: 行者123 · Updated: 2023-12-05 05:47:38

I am currently having trouble implementing the Metropolis-Hastings algorithm.

I am trying to use the algorithm to compute integrals of the form

∫_{-∞}^{∞} x² e^{-x²} dx

When running this algorithm, we obtain a long chain of configurations (in this case, each configuration is just a single number) such that, towards the tail end of the chain, the probability of a given configuration follows (or rather, tends to) a Gaussian distribution.

My code does not seem to produce the Gaussian distribution described above. There is a strange dependence on the transition probability (the probability of choosing a new candidate configuration based on the previous configuration in the chain). However, if this transition probability is symmetric, there should be no dependence on this function at all (it only affects how quickly phase space [the space of potential configurations] is explored and how quickly the chain converges to the desired distribution)!
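For reference, with a symmetric proposal q(x'|x) = q(x|x'), the Metropolis-Hastings acceptance probability reduces to the plain Metropolis rule, which depends only on the target density p(x) ∝ exp(-x²):

```latex
A(x \to x') = \min\!\left(1,\; \frac{p(x')\,q(x \mid x')}{p(x)\,q(x' \mid x)}\right)
            = \min\!\left(1,\; \frac{p(x')}{p(x)}\right)
            = \min\!\left(1,\; e^{-(x'^2 - x^2)}\right)
```

This is exactly the `hasting_ratio` computed in the code below, so the proposal width d should indeed only affect mixing speed, not the stationary distribution.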

In my case, I am using a normally distributed transition function with width d (which satisfies the symmetry requirement). For every d I use, I do get a Gaussian distribution, but the standard deviation sigma depends on my choice of d. The resulting Gaussian should have a sigma of about 0.707, but I find that the value I actually get depends on the parameter d, when it should not.
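The target value of sigma follows from matching p(x) ∝ exp(-x²) against the standard Gaussian form:

```latex
e^{-x^2} = e^{-x^2/(2\sigma^2)}
\;\Rightarrow\; 2\sigma^2 = 1
\;\Rightarrow\; \sigma = \tfrac{1}{\sqrt{2}} \approx 0.707
```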

I am not sure where the error in this code is, and I would appreciate any help!

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm


'''
We want to get an exponential decay integral approx using importance sampling.
We will try to integrate x^2 exp(-x^2) over the real line.
The Metropolis-Hastings alg will generate configurations (in this case, single numbers) such that
the probability of a given configuration x^a ~ p(x^a) for p(x) propto exp(-x^2).

Once configs = {x^a} are generated, the approximation, Q_N, of the integral, I, will be given by
Q_N = 1/N sum_(configs) x^2
lim (N -> inf) Q_N -> I
'''

'''
Implementing the Metropolis-Hastings algorithm
'''

#Setting up the initial config for our chain, generating first 2 to generate numpy array
x_0 = np.random.uniform(-20,-10,2)

#Defining function that generates the next N steps in the chain, given a starting config x
#Works by iteratively taking the last element in the chain, generating a new candidate configuration from it and accepting/rejecting according to the algorithm
#Success and failures implemented to see roughly the success rate of each step
def next_steps(x, N):
    i = 0
    Success = 0
    Failures = 0
    Data = np.array(x)
    d = 1.5  # Spread of (normal) transition function
    while i < N:
        r = np.random.uniform(0, 1)
        delta = np.random.normal(0, d)
        x_old = Data[-1]
        x_new = x_old + delta
        hasting_ratio = np.exp(-(x_new**2 - x_old**2))
        if hasting_ratio > r:
            i = i + 1
            Data = np.append(Data, x_new)
            Success = Success + 1
        else:
            Failures = Failures + 1
    print(Success)
    print(Failures)
    return Data

#Number of steps in the chain
N_iteration = 50000

#Generating the data
Data = next_steps(x_0,N_iteration)

#Plotting data to see convergence of chain to gaussian distribution
plt.plot(Data)
plt.show()

#Obtaining tail end data and obtaining the standard deviation of resulting gaussian distribution
Data = Data[-40000:]
(mu, sigma) = norm.fit(Data)
print(sigma)


#Plotting a histogram to visually see if guassian
plt.hist(Data, bins = 300)
plt.show()

Best Answer

You need to record x even when it does not change. Otherwise, the central values are under-counted, and the variance grows as d increases.

import numpy as np
from scipy.stats import norm


"""
We want to get an exponential decay integral approx using importance sampling.
We will try to integrate x^2exp(-x^2) over the real line.
Metropolis-hasting alg will generate configuartions (in this case, single numbers) such that
the probablity of a given configuration x^a ~ p(x^a) for p(x) propto exp(-x^2).

Once configs = {x^a} generated, the apporximation, Q_N, of the integral, I, will be given by
Q_N = 1/N sum_(configs) x^2
lim (N-> inf) Q_N -> I
"""

"""
Implementing the Metropolis-Hastings algorithm
"""

# Setting up the initial config for our chain
x_0 = np.random.uniform(-20, -10)

# Defining function that generates the next N steps in the chain, given a starting config x
# Works by iteratively taking the last element in the chain, generating a new candidate configuration from it and accepting/rejecting according to the algorithm
# Success and failures implemented to see roughly the success rate of each step
def next_steps(x, N):
    Success = 0
    Failures = 0
    Data = np.empty((N,))
    d = 1.5  # Spread of (normal) transition function
    for i in range(N):
        r = np.random.uniform(0, 1)
        delta = np.random.normal(0, d)
        x_new = x + delta
        hasting_ratio = np.exp(-(x_new ** 2 - x ** 2))
        if hasting_ratio > r:
            x = x_new
            Success = Success + 1
        else:
            Failures = Failures + 1
        Data[i] = x
    print(Success)
    print(Failures)
    return Data


# Number of steps in the chain
N_iteration = 50000

# Generating the data
Data = next_steps(x_0, N_iteration)

# Obtaining tail end data and obtaining the standard deviation of resulting gaussian distribution
Data = Data[-40000:]
(mu, sigma) = norm.fit(Data)
print(sigma)
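As a quick check of the fix, here is a minimal, self-contained sketch of the corrected sampler (the name `metropolis_gaussian` is mine, not from the original post). It also turns the tail samples into the importance-sampling estimate Q_N; since p(x) ∝ exp(-x²) has E[x²] = σ² = 1/2, multiplying Q_N by the normalisation Z = √π recovers the actual integral √π/2 ≈ 0.886:

```python
import numpy as np

def metropolis_gaussian(n_steps, d=1.5, x0=0.0, rng=None):
    # Metropolis sampler for p(x) propto exp(-x^2), recording x at
    # every step (including rejections), as the accepted answer requires.
    rng = np.random.default_rng(rng)
    samples = np.empty(n_steps)
    x = x0
    for i in range(n_steps):
        x_new = x + rng.normal(0.0, d)
        # Symmetric proposal, so the acceptance ratio is just p(x')/p(x)
        if np.exp(-(x_new**2 - x**2)) > rng.uniform():
            x = x_new
        samples[i] = x  # record x whether or not the move was accepted
    return samples

samples = metropolis_gaussian(50_000, rng=0)
tail = samples[-40_000:]
q_n = np.mean(tail**2)           # Q_N, an estimate of E[x^2] = 1/2
integral = np.sqrt(np.pi) * q_n  # rescale by Z = sqrt(pi) for the integral
print(q_n, integral)             # should be near 0.5 and 0.886
```

With this version, changing d alters only the acceptance rate and mixing speed, while the fitted sigma stays near 1/√2 regardless.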

Regarding "python - Implementation of the Metropolis-Hastings algorithm for a Gaussian integral", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/70958350/
