
python - TypeError: setup() got an unexpected keyword argument 'stage'


I am trying to train my question-answering model with pytorch_lightning. However, when I run trainer.fit(model, data_module), I get the following error:

---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-72-b9cdaa88efa7> in <module>()
----> 1 trainer.fit(model,data_module)

4 frames
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in _call_setup_hook(self)
1488
1489 if self.datamodule is not None:
-> 1490 self.datamodule.setup(stage=fn)
1491 self._call_callback_hooks("setup", stage=fn)
1492 self._call_lightning_module_hook("setup", stage=fn)

TypeError: setup() got an unexpected keyword argument 'stage'

I have already installed and imported pytorch_lightning.

I have also defined data_module = BioQADataModule(train_df, val_df, tokenizer, batch_size=BATCH_SIZE), where BATCH_SIZE = 2 and N_EPOCHS = 6.
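For completeness, the pieces are wired together roughly as follows (a minimal sketch: train_df, val_df, and tokenizer are defined earlier in the notebook, and the pl.Trainer arguments are an assumption, since only trainer.fit(model, data_module) appears above):

import pytorch_lightning as pl

BATCH_SIZE = 2
N_EPOCHS = 6

model = BioQAModel()
data_module = BioQADataModule(train_df, val_df, tokenizer, batch_size=BATCH_SIZE)

# assumed Trainer configuration; only the epoch count is given above
trainer = pl.Trainer(max_epochs=N_EPOCHS)
trainer.fit(model, data_module)  # -> TypeError: setup() got an unexpected keyword argument 'stage'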

The model I am using is as follows:

model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)

In addition, I have defined the following class for the model:

class BioQAModel(pl.LightningModule):

    def __init__(self):
        super().__init__()
        self.model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)

    def forward(self, input_ids, attention_mask, labels=None):
        # forward the batch tensors passed in to the underlying T5 model
        output = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            labels=labels
        )
        return output.loss, output.logits

    def training_step(self, batch, batch_idx):
        input_ids = batch["input_ids"]
        attention_mask = batch["attention_mask"]
        labels = batch["labels"]
        loss, outputs = self(input_ids, attention_mask, labels)
        self.log("train_loss", loss, prog_bar=True, logger=True)
        return loss

    def validation_step(self, batch, batch_idx):
        input_ids = batch["input_ids"]
        attention_mask = batch["attention_mask"]
        labels = batch["labels"]
        loss, outputs = self(input_ids, attention_mask, labels)
        self.log("val_loss", loss, prog_bar=True, logger=True)
        return loss

    def test_step(self, batch, batch_idx):
        input_ids = batch["input_ids"]
        attention_mask = batch["attention_mask"]
        labels = batch["labels"]
        loss, outputs = self(input_ids, attention_mask, labels)
        self.log("test_loss", loss, prog_bar=True, logger=True)
        return loss

    def configure_optimizers(self):
        # AdamW from transformers (torch.optim.AdamW also works)
        return AdamW(self.parameters(), lr=0.0001)

Please let me know if any additional information is required.

Edit 1: Adding the BioQADataModule:

class BioQADataModule(pl.LightningDataModule):

    def __init__(
        self,
        train_df: pd.DataFrame,
        test_df: pd.DataFrame,
        tokenizer: T5Tokenizer,
        batch_size: int = 8,
        source_max_token_len: int = 396,
        target_max_token_len: int = 32
    ):
        super().__init__()
        self.batch_size = batch_size
        self.train_df = train_df
        self.test_df = test_df
        self.tokenizer = tokenizer
        self.source_max_token_len = source_max_token_len
        self.target_max_token_len = target_max_token_len

    def setup(self):
        self.train_dataset = BioQADataset(
            self.train_df,
            self.tokenizer,
            self.source_max_token_len,
            self.target_max_token_len
        )

        self.test_dataset = BioQADataset(
            self.test_df,
            self.tokenizer,
            self.source_max_token_len,
            self.target_max_token_len
        )

    def train_dataloader(self):
        return DataLoader(
            self.train_dataset,
            batch_size=self.batch_size,
            shuffle=True,
            num_workers=4
        )

    def val_dataloader(self):
        # evaluate on the held-out split; shuffling is unnecessary here
        return DataLoader(
            self.test_dataset,
            batch_size=1,
            shuffle=False,
            num_workers=4
        )

    def test_dataloader(self):
        return DataLoader(
            self.test_dataset,
            batch_size=1,
            shuffle=False,
            num_workers=4
        )

Best Answer

You need to add an extra argument, stage=None, to the setup method:

def setup(self, stage=None):
    self.train_dataset = BioQADataset(
        self.train_df,
        self.tokenizer,
        self.source_max_token_len,
        self.target_max_token_len
    )

    self.test_dataset = BioQADataset(
        self.test_df,
        self.tokenizer,
        self.source_max_token_len,
        self.target_max_token_len
    )
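Beyond the minimal fix, Lightning passes the current phase in stage ("fit", "validate", "test", or "predict"), so setup can build only what that phase needs. A sketch reusing the same dataset construction as above:

def setup(self, stage=None):
    # Lightning passes stage="fit", "validate", "test", or "predict";
    # None covers a direct call with no stage given
    if stage in (None, "fit"):
        self.train_dataset = BioQADataset(
            self.train_df,
            self.tokenizer,
            self.source_max_token_len,
            self.target_max_token_len
        )
    if stage in (None, "fit", "test"):
        # also built during fit, since val_dataloader reads test_dataset
        self.test_dataset = BioQADataset(
            self.test_df,
            self.tokenizer,
            self.source_max_token_len,
            self.target_max_token_len
        )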

I have used PyTorch Lightning for multi-GPU training myself here. Some of the code is a bit dated (metrics are now a standalone module), but you may still find it useful.
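On the metrics remark: metrics now live in the standalone torchmetrics package. A minimal sketch of the current pattern, assuming a reasonably recent torchmetrics (>= 0.11); the toy classifier is a placeholder, not code from the question:

import torch
import torchmetrics
import pytorch_lightning as pl

class LitClassifier(pl.LightningModule):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.net = torch.nn.Linear(16, num_classes)
        # metrics are provided by the separate torchmetrics package
        self.val_acc = torchmetrics.Accuracy(task="multiclass", num_classes=num_classes)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        logits = self.net(x)
        loss = torch.nn.functional.cross_entropy(logits, y)
        self.val_acc(logits, y)
        # logging the metric object lets Lightning aggregate it per epoch
        self.log("val_acc", self.val_acc, prog_bar=True)
        return loss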

Regarding python - TypeError: setup() got an unexpected keyword argument 'stage', we found a similar question on Stack Overflow: https://stackoverflow.com/questions/71922261/
