
python - How to declare categorical columns in decode_csv when using Tensorflow Datasets?


I'm trying to read columns from a csv file using the Tensorflow Dataset API.

First I list my column names and how many columns I have:

numerical_feature_names = ["N1", "N2"]
categorical_feature_names = ["C3", "C4", "C5"]
amount_of_columns_csv = 5

Then I declare my column types:

feature_columns = [tf.feature_column.numeric_column(k) for k in numerical_feature_names]

for k in categorical_feature_names:
    current_categorical_column = tf.feature_column.categorical_column_with_hash_bucket(
        key=k,
        hash_bucket_size=40)

    feature_columns.append(tf.feature_column.indicator_column(current_categorical_column))

And finally my input function:

def my_input_fn(file_path, perform_shuffle=False, repeat_count=1):
    # assumed here: feature columns come first and the last csv column is the label
    feature_names = numerical_feature_names + categorical_feature_names

    def decode_csv(line):
        parsed_line = tf.decode_csv(line, [[0.]]*amount_of_columns_csv, field_delim=';', na_value='-1')
        label = parsed_line[-1]  # last element is the label
        del parsed_line[-1]      # everything else is a feature
        d = dict(zip(feature_names, parsed_line)), label
        return d

    dataset = (tf.data.TextLineDataset(file_path)  # Read text file
               .skip(1)                            # Skip header row
               .map(decode_csv))                   # Transform each elem by applying decode_csv fn
    if perform_shuffle:
        # Randomizes input using a window of BATCH_SIZE elements (read into memory)
        dataset = dataset.shuffle(buffer_size=BATCH_SIZE)
    dataset = dataset.repeat(repeat_count)  # Repeats dataset this # times
    dataset = dataset.batch(BATCH_SIZE)     # Batch size to use
    iterator = dataset.make_one_shot_iterator()
    batch_features, batch_labels = iterator.get_next()
    return batch_features, batch_labels

How should I declare my record_defaults argument in the decode_csv call? At the moment I only use [[0.]], which captures numeric columns.

And if I have thousands of columns mixing numeric and categorical ones, how can I avoid declaring the structure by hand in the decode_csv function?
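
For context, tf.decode_csv infers each output tensor's dtype from the dtype of its default value, so a mixed layout can be declared by giving numeric columns a float default and categorical columns a string default. A minimal sketch, assuming the two numeric columns come first (the column order here is an assumption):

import tensorflow as tf

# mixed defaults: floats parse as tf.float32, empty strings as tf.string
record_defaults = [[0.], [0.], [''], [''], ['']]  # N1, N2, C3, C4, C5 (assumed order)

def decode_csv(line):
    parsed_line = tf.decode_csv(line, record_defaults, field_delim=';')
    feature_names = numerical_feature_names + categorical_feature_names
    return dict(zip(feature_names, parsed_line))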

Best Answer

Rather than loading the csv directly in Tensorflow, I first load it into a pandas dataframe, iterate over the column dtypes, and build my column-type arrays so that I can reuse them in the Tensorflow input function. The code looks like this:

import numpy as np
import pandas as pd
import tensorflow as tf

CSV_COLUMN_NAMES = pd.read_csv(FILE_TRAIN, nrows=1).columns.tolist()
train = pd.read_csv(FILE_TRAIN, names=CSV_COLUMN_NAMES, header=0)
train_x, train_y = train, train.pop('labels')

test = pd.read_csv(FILE_TEST, names=CSV_COLUMN_NAMES, header=0)
test_x, test_y = test, test.pop('labels')

# iterate over the column dtypes to build my column arrays
numerical_feature_names = []
categorical_feature_names = []
for column in train.columns:
    print(train[column].dtype)
    if train[column].dtype == np.float64 or train[column].dtype == np.int64:
        numerical_feature_names.append(column)
    else:
        categorical_feature_names.append(column)
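
Since this dtype pass recovers both the column order and the column types, the same information could in principle be used to build the record_defaults list for tf.decode_csv automatically, instead of declaring it by hand. A sketch under that assumption, taking 0. as the numeric default and '' as the string default:

# hypothetical helper: derive decode_csv defaults from the pandas dtypes,
# so numeric columns parse as tf.float32 and all others as tf.string
record_defaults = [[0.] if train[c].dtype in (np.float64, np.int64) else ['']
                   for c in train.columns]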

feature_columns = [tf.feature_column.numeric_column(k) for k in numerical_feature_names]

# here is an example of how you could process categorical columns
for k in categorical_feature_names:
    current_bucket = train[k].nunique()
    if current_bucket > 10:
        feature_columns.append(
            tf.feature_column.indicator_column(
                tf.feature_column.categorical_column_with_vocabulary_list(key=k, vocabulary_list=train[k].unique())
            )
        )
    else:
        feature_columns.append(
            tf.feature_column.indicator_column(
                tf.feature_column.categorical_column_with_hash_bucket(key=k, hash_bucket_size=current_bucket)
            )
        )

And finally the input function:

# input_fn for training, conversion of the dataframe to a dataset
def train_input_fn(features, labels, batch_size, repeat_count):
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
    dataset = dataset.shuffle(256).repeat(repeat_count).batch(batch_size)
    return dataset
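
For completeness, one way these pieces could be wired together with a canned estimator; the DNNClassifier and its hyperparameters below are illustrative assumptions, not part of the original answer:

# illustrative usage only: model type and hyperparameters are assumptions
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[64, 32],
    n_classes=2)

classifier.train(
    input_fn=lambda: train_input_fn(train_x, train_y, batch_size=32, repeat_count=10))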

Regarding python - How to declare categorical columns in decode_csv when using Tensorflow Datasets?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49026632/
