
python - SageMaker script mode training: How to import custom modules in training script?

Reposted. Author: 行者123. Updated: 2023-12-05 06:09:30

I am trying to train a model on image data using SageMaker script mode. I have several scripts for data preparation, model creation, and training. These are the contents of my working directory:

WORKDIR
|-- config
| |-- hyperparameters.json
| |-- lossweights.json
| `-- lr.json
|-- dataset.py
|-- densenet.py
|-- resnet.py
|-- models.py
|-- train.py
|-- imagenet_utils.py
|-- keras_utils.py
|-- utils.py
`-- train.ipynb

The training script is train.py, which imports the other scripts. To launch the training job, I use the following code:

bucket='ashutosh-sagemaker'
data_key = 'training'
data_location = 's3://{}/{}'.format(bucket, data_key)
print(data_location)
inputs = {'data':data_location}
print(inputs)

from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(entry_point='train.py',
                       role=role,
                       train_instance_count=1,
                       train_instance_type='ml.p2.xlarge',
                       framework_version='1.14',
                       py_version='py3',
                       script_mode=True,
                       hyperparameters={
                           'epochs': 10
                       })

estimator.fit(inputs)

When I run this code, I get the following output:

2020-11-09 10:42:07 Starting - Starting the training job...
2020-11-09 10:42:10 Starting - Launching requested ML instances......
2020-11-09 10:43:24 Starting - Preparing the instances for training.........
2020-11-09 10:44:43 Downloading - Downloading input data....................................
2020-11-09 10:51:08 Training - Downloading the training image...
2020-11-09 10:51:40 Uploading - Uploading generated training model

Traceback (most recent call last):
  File "train.py", line 5, in <module>
    from dataset import WatchDataSet
ModuleNotFoundError: No module named 'dataset'
WARNING: Logging before flag parsing goes to stderr.
E1109 10:51:37.525632 140519531874048 _trainer.py:94] ExecuteUserScriptError:
Command "/usr/local/bin/python3.6 train.py --epochs 10 --model_dir s3://sagemaker-ap-northeast-1-485707876195/tensorflow-training-2020-11-09-10-42-06-234/model"

2020-11-09 10:51:47 Failed - Training job failed

What should I do to get rid of the ModuleNotFoundError? I tried to find a solution but could not find any relevant resources.

Contents of the train.py file:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from dataset import WatchDataSet
from models import BCNN
from utils import image_generator, val_image_generator
from utils import BCNNScheduler, LossWeightsModifier
from utils import restore_checkpoint, get_epoch_key

import argparse
from collections import defaultdict
import json
import keras
from keras import backend as K
from keras import optimizers
from keras.backend import tensorflow_backend
from keras.callbacks import LearningRateScheduler, TensorBoard
from math import ceil
import numpy as np
import os
import glob
from sklearn.model_selection import train_test_split

parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=100, help='number of epoch of training')
parser.add_argument('--batch_size', type=int, default=32, help='size of the batches')
parser.add_argument('--data', type=str, default=os.environ.get('SM_CHANNEL_DATA'))

opt = parser.parse_args()

def main():

    csv_config_dict = {
        'csv': opt.data + 'train.csv',
        'image_dir': opt.data + 'images',
        'xlabel_column': opt.xlabel_column,
        'brand_column': opt.brand_column,
        'model_column': opt.model_column,
        'ref_column': opt.ref_column,
        'encording': opt.encoding
    }

    dataset = WatchDataSet(
        csv_config_dict=csv_config_dict,
        min_data_ref=opt.min_data_ref
    )

    X, y_c1, y_c2, y_fine = dataset.X, dataset.y_c1, dataset.y_c2, dataset.y_fine
    brand_uniq, model_uniq, ref_uniq = dataset.brand_uniq, dataset.model_uniq, dataset.ref_uniq

    print("ref. shape: ", y_fine.shape)
    print("brand shape: ", y_c1.shape)
    print("model shape: ", y_c2.shape)

    height, width = 224, 224
    channel = 3

    # get pre-trained weights
    if opt.mode == 'dense':
        WEIGHTS_PATH = 'https://github.com/keras-team/keras-applications/releases/download/densenet/densenet121_weights_tf_dim_ordering_tf_kernels.h5'
    elif opt.mode == 'res':
        WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels.h5'
    weights_path, current_epoch, checkpoint = restore_checkpoint(opt.ckpt_path, WEIGHTS_PATH)

    # split train/validation
    y_ref_list = np.array([ref_uniq[np.argmax(i)] for i in y_fine])
    index_list = np.array(range(len(X)))
    train_index, test_index, _, _ = train_test_split(index_list, y_ref_list, train_size=0.8, random_state=23, stratify=None)

    print("Train")
    model = None
    bcnn = BCNN(
        height=height,
        width=width,
        channel=channel,
        num_classes=len(ref_uniq),
        coarse1_classes=len(brand_uniq),
        coarse2_classes=len(model_uniq),
        mode=opt.mode
    )

if __name__ == '__main__':
    main()
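(A side note on the script above, unrelated to the import error: SageMaker mounts each input channel under /opt/ml/input/data/&lt;channel_name&gt;, and the SM_CHANNEL_DATA environment variable carries no trailing slash, so opt.data + 'train.csv' concatenates without a path separator. A small sketch, using a hypothetical channel path for illustration:)

```python
import os

# Hypothetical value of SM_CHANNEL_DATA for a channel named "data";
# the real path is set by SageMaker inside the training container.
data_dir = '/opt/ml/input/data/data'

broken = data_dir + 'train.csv'             # separator is lost
fixed = os.path.join(data_dir, 'train.csv') # separator inserted correctly

print(broken)   # /opt/ml/input/data/datatrain.csv
print(fixed)    # /opt/ml/input/data/data/train.csv
```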

Best answer

If you don't mind switching from TF 1.14 to TF 1.15.2+, you can bring a local code directory containing your custom modules to the SageMaker TensorFlow Estimator via the source_dir parameter. Your entry-point script must be located inside that source_dir. Details are in the SageMaker TensorFlow documentation: https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/using_tf.html#use-third-party-libraries
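A minimal sketch of that suggestion, reusing the bucket, role, and hyperparameters from the question (parameter names follow the v1 SageMaker Python SDK used in the question; this is a configuration fragment that only runs against a live AWS account):

```python
from sagemaker.tensorflow import TensorFlow

# source_dir points at the local directory holding train.py and its helper
# modules (dataset.py, models.py, utils.py, config/, ...). SageMaker uploads
# the whole directory into the container, so `from dataset import WatchDataSet`
# resolves at training time.
estimator = TensorFlow(entry_point='train.py',
                       source_dir='.',          # the WORKDIR shown in the question
                       role=role,
                       train_instance_count=1,
                       train_instance_type='ml.p2.xlarge',
                       framework_version='1.15.2',
                       py_version='py3',
                       script_mode=True,
                       hyperparameters={'epochs': 10})

estimator.fit({'data': 's3://ashutosh-sagemaker/training'})
```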

Regarding "python - SageMaker script mode training: How to import custom modules in training script?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64754032/
