
r - What causes the difference in MAE results between these datasets when doing deep learning in R?

Reposted · Author: 行者123 · Updated: 2023-11-30 08:48:02

I am trying to replicate the deep learning example below using the same Boston housing dataset obtained from a different source.

https://jjallaire.github.io/deep-learning-with-r-notebooks/notebooks/3.6-predicting-house-prices.nb.html

The original data source is:

library(keras)
dataset <- dataset_boston_housing()

Alternatively, I tried to use:

library(mlbench)
data(BostonHousing)

The differences between the datasets are:

  1. The dataset from mlbench contains column names.
  2. The dataset from keras is already split into a test set and a training set.
  3. The dataset from keras is organized as a list of matrices, while the dataset from mlbench is a data frame.
  4. The fourth column contains a categorical variable "chas", which cannot be preprocessed from the mlbench dataset but can be from the keras dataset. To compare apples to apples, I removed this column from both datasets.

To compare the two datasets, I merged the training and test sets from keras into one dataset. I then compared that merged dataset with the mlbench one using summary(); they are identical for every feature (min, max, median, mean).
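The merge-and-compare step described above can be sketched as follows (a minimal sketch; the variable names are my own, not from the original code):

```r
library(keras)
library(mlbench)

# Recombine the keras train/test split and drop 'chas' (column 4)
dataset <- dataset_boston_housing()
keras_all <- rbind(dataset$train$x, dataset$test$x)[, -4]

# Drop 'chas' and the target 'medv' from the mlbench data frame
data(BostonHousing)
mlbench_all <- BostonHousing[, setdiff(names(BostonHousing), c("chas", "medv"))]

# Per-feature min/max/median/mean should match between the two sources
summary(keras_all)
summary(mlbench_all)
```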

Since the dataset from keras is already split into test and training sets (80-20), I can only use that one training set for the deep learning process. With this training set, the validation_mae is about 2.5. See this plot:

[plot: validation MAE per epoch, keras dataset]

If I partition the data from mlbench at 0.8 to build a training set of comparable size, run the deep learning code, and repeat this several times, I never reach a validation_mae of about 2.5. It ranges between 4 and 6. An example of the output is this plot:

[plot: validation MAE per epoch, mlbench dataset]

Does anyone know what causes this difference?

Code using the keras dataset:


library(keras)
dataset <- dataset_boston_housing()

c(c(train_data, train_targets), c(test_data, test_targets)) %<-% dataset

train_data <- train_data[,-4]
test_data <- test_data[,-4]

mean <- apply(train_data, 2, mean)
std <- apply(train_data, 2, sd)
train_data <- scale(train_data, center = mean, scale = std)
test_data <- scale(test_data, center = mean, scale = std)

# After this line the code is the same for both code examples.
# =========================================

# Because we will need to instantiate the same model multiple times,
# we use a function to construct it.
build_model <- function() {
  model <- keras_model_sequential() %>%
    layer_dense(units = 64, activation = "relu",
                input_shape = dim(train_data)[[2]]) %>%
    layer_dense(units = 64, activation = "relu") %>%
    layer_dense(units = 1)

  model %>% compile(
    optimizer = "rmsprop",
    loss = "mse",
    metrics = c("mae")
  )
}

k <- 4
indices <- sample(1:nrow(train_data))
folds <- cut(1:length(indices), breaks = k, labels = FALSE)
num_epochs <- 100
all_scores <- c()
for (i in 1:k) {
  cat("processing fold #", i, "\n")
  # Prepare the validation data: data from partition # i
  val_indices <- which(folds == i, arr.ind = TRUE)
  val_data <- train_data[val_indices, ]
  val_targets <- train_targets[val_indices]

  # Prepare the training data: data from all other partitions
  partial_train_data <- train_data[-val_indices, ]
  partial_train_targets <- train_targets[-val_indices]

  # Build the Keras model (already compiled)
  model <- build_model()

  # Train the model (in silent mode, verbose = 0)
  model %>% fit(partial_train_data, partial_train_targets,
                epochs = num_epochs, batch_size = 1, verbose = 0)

  # Evaluate the model on the validation data
  results <- model %>% evaluate(val_data, val_targets, verbose = 0)
  all_scores <- c(all_scores, results$mean_absolute_error)
}
all_scores
mean(all_scores)

# Some memory clean-up
k_clear_session()
num_epochs <- 500
all_mae_histories <- NULL
for (i in 1:k) {
  cat("processing fold #", i, "\n")

  # Prepare the validation data: data from partition # i
  val_indices <- which(folds == i, arr.ind = TRUE)
  val_data <- train_data[val_indices, ]
  val_targets <- train_targets[val_indices]

  # Prepare the training data: data from all other partitions
  partial_train_data <- train_data[-val_indices, ]
  partial_train_targets <- train_targets[-val_indices]

  # Build the Keras model (already compiled)
  model <- build_model()

  # Train the model, recording the per-epoch validation MAE
  history <- model %>% fit(
    partial_train_data, partial_train_targets,
    validation_data = list(val_data, val_targets),
    epochs = num_epochs, batch_size = 1, verbose = 1
  )
  mae_history <- history$metrics$val_mean_absolute_error
  all_mae_histories <- rbind(all_mae_histories, mae_history)
}


average_mae_history <- data.frame(
  epoch = seq_len(ncol(all_mae_histories)),
  validation_mae = apply(all_mae_histories, 2, mean)
)


library(ggplot2)
ggplot(average_mae_history, aes(x = epoch, y = validation_mae)) + geom_line()

Code using the dataset from mlbench (after the "=====" line, the code is the same as above):


library(dplyr)
library(mlbench)
library(groupdata2)

data(BostonHousing)

parts <- partition(BostonHousing, p = 0.2)
test_data <- parts[[1]]
train_data <- parts[[2]]


train_targets <- train_data$medv
test_targets <- test_data$medv

train_data$medv <- NULL
test_data$medv <- NULL


train_data$chas <- NULL
test_data$chas <- NULL

mean <- apply(train_data, 2, mean)
std <- apply(train_data, 2, sd)
train_data <- scale(train_data, center = mean, scale = std)
test_data <- scale(test_data, center = mean, scale = std)

library(keras)

# After this line the code is the same for both code examples.
# =========================================

# (identical to the keras example above, from build_model() through the ggplot call)

Thank you!

Best Answer

I'm writing this as an answer because I can't comment yet. I checked the mlbench dataset documentation; it says it contains the 14 columns of the original Boston dataset plus 5 additional columns. I'm not sure whether there is an error in your dataset, since you state that there is no difference in the number of columns between the datasets.
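The column counts are easy to verify directly (a quick sketch; in mlbench, `BostonHousing2` is the extended version with the five additional columns):

```r
library(mlbench)

data(BostonHousing)    # original: 13 features + medv
data(BostonHousing2)   # extended: 14 original + 5 additional columns
ncol(BostonHousing)    # 14
ncol(BostonHousing2)   # 19
names(BostonHousing2)
```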

Another guess could be that the second example plot comes from a model stuck in a local minimum. To get more comparable models, you may want to use the same seeds, so that the initialization of the weights etc. is identical and you get the same results.
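The seeding suggested above could be sketched like this (assuming the keras R package; `use_session_with_seed()` was the recommended way in older keras versions to fix the R, Python, and TensorFlow seeds together):

```r
library(keras)

# Fix R's RNG so the partitioning / fold assignment is reproducible
set.seed(42)

# Fix the seeds used for weight initialization in the TensorFlow backend
# (this also disables GPU and CPU parallelism for full reproducibility)
use_session_with_seed(42)
```

With identical splits and identical initial weights, any remaining MAE gap between the two runs would point at the data rather than at training randomness.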

Hope this helps; feel free to ask follow-up questions.

For this question, a similar one was found on Stack Overflow: https://stackoverflow.com/questions/58051377/
