I am trying to run this PyTorch person detection example:
https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html
I am using Ubuntu 18.04. Here is a summary of the steps I performed:
1) Installed stock Ubuntu 18.04 on a Lenovo ThinkPad X1 Extreme Gen 2 with a GTX 1650 GPU.
2) Performed a standard CUDA 10.0 / cuDNN 7.4 install. I would rather not restate all the steps, as this post is already going to be long enough; it is a standard procedure, and just about any link found via a Google search is what I followed.
3) Installed torch and torchvision.
4) From this link on the PyTorch website:
https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html
I saved the source available from the link at the bottom:
https://pytorch.org/tutorials/_static/tv-training-code.py
to a directory I created, PennFudanExample.
5) I did the following (found at the top of the notebook linked above):
Installed the CoCo API into Python:
cd ~
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
python3 setup.py build_ext --inplace
sudo python3 setup.py install
cd ~
git clone https://github.com/pytorch/vision.git
cd vision
git checkout v0.5.0
From ~/vision/references/detection, copied coco_eval.py, coco_utils.py, engine.py, transforms.py, and utils.py to the directory PennFudanExample.
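The copy in step 5 can also be scripted. This is only a sketch; the helper name, the `DETECTION_HELPERS` list, and the parameterized paths are mine, not from the tutorial, which simply copies the files by hand:

```python
import shutil
from pathlib import Path

# the five helper modules tv-training-code.py expects to find next to itself
DETECTION_HELPERS = ["coco_eval.py", "coco_utils.py", "engine.py",
                     "transforms.py", "utils.py"]

def copy_detection_helpers(src_dir, dst_dir):
    """Copy the torchvision detection reference helpers from src_dir
    (e.g. ~/vision/references/detection) into dst_dir
    (e.g. the PennFudanExample directory), creating dst_dir if needed."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for name in DETECTION_HELPERS:
        shutil.copy(src / name, dst / name)
    return [dst / name for name in DETECTION_HELPERS]
```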
6) The only change I made to tv-training-code.py in PennFudanExample was to change the training batch size from 2 to 1, to prevent a GPU out-of-memory crash; see the other post I made about that here:
Here is tv-training-code.py as I am running it, with the slight batch-size edit I mentioned:
# Sample code from the TorchVision 0.3 Object Detection Finetuning Tutorial
# http://pytorch.org/tutorials/intermediate/torchvision_tutorial.html
import os
import numpy as np
import torch
from PIL import Image
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor
from engine import train_one_epoch, evaluate
import utils
import transforms as T
class PennFudanDataset(object):
def __init__(self, root, transforms):
self.root = root
self.transforms = transforms
# load all image files, sorting them to
# ensure that they are aligned
self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages"))))
self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks"))))
def __getitem__(self, idx):
        # load images and masks
img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
img = Image.open(img_path).convert("RGB")
# note that we haven't converted the mask to RGB,
# because each color corresponds to a different instance
# with 0 being background
mask = Image.open(mask_path)
mask = np.array(mask)
# instances are encoded as different colors
obj_ids = np.unique(mask)
# first id is the background, so remove it
obj_ids = obj_ids[1:]
# split the color-encoded mask into a set
# of binary masks
masks = mask == obj_ids[:, None, None]
# get bounding box coordinates for each mask
num_objs = len(obj_ids)
boxes = []
for i in range(num_objs):
pos = np.where(masks[i])
xmin = np.min(pos[1])
xmax = np.max(pos[1])
ymin = np.min(pos[0])
ymax = np.max(pos[0])
boxes.append([xmin, ymin, xmax, ymax])
boxes = torch.as_tensor(boxes, dtype=torch.float32)
# there is only one class
labels = torch.ones((num_objs,), dtype=torch.int64)
masks = torch.as_tensor(masks, dtype=torch.uint8)
image_id = torch.tensor([idx])
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
# suppose all instances are not crowd
iscrowd = torch.zeros((num_objs,), dtype=torch.int64)
target = {}
target["boxes"] = boxes
target["labels"] = labels
target["masks"] = masks
target["image_id"] = image_id
target["area"] = area
target["iscrowd"] = iscrowd
if self.transforms is not None:
img, target = self.transforms(img, target)
return img, target
def __len__(self):
return len(self.imgs)
def get_model_instance_segmentation(num_classes):
    # load an instance segmentation model pre-trained on COCO
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
# get number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# now get the number of input features for the mask classifier
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
hidden_layer = 256
# and replace the mask predictor with a new one
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask,
hidden_layer,
num_classes)
return model
def get_transform(train):
transforms = []
transforms.append(T.ToTensor())
if train:
transforms.append(T.RandomHorizontalFlip(0.5))
return T.Compose(transforms)
def main():
# train on the GPU or on the CPU, if a GPU is not available
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# our dataset has two classes only - background and person
num_classes = 2
# use our dataset and defined transformations
dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
dataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False))
# split the dataset in train and test set
indices = torch.randperm(len(dataset)).tolist()
dataset = torch.utils.data.Subset(dataset, indices[:-50])
dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:])
# define training and validation data loaders
# !!!! CHANGE HERE !!!! For this function call, I changed the batch_size param value from 2 to 1, otherwise this file is exactly as provided from the PyTorch website !!!!
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=1, shuffle=True, num_workers=4,
collate_fn=utils.collate_fn)
data_loader_test = torch.utils.data.DataLoader(
dataset_test, batch_size=1, shuffle=False, num_workers=4,
collate_fn=utils.collate_fn)
# get the model using our helper function
model = get_model_instance_segmentation(num_classes)
# move model to the right device
model.to(device)
# construct an optimizer
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005,
momentum=0.9, weight_decay=0.0005)
# and a learning rate scheduler
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
step_size=3,
gamma=0.1)
# let's train it for 10 epochs
num_epochs = 10
for epoch in range(num_epochs):
# train for one epoch, printing every 10 iterations
train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
# update the learning rate
lr_scheduler.step()
# evaluate on the test dataset
evaluate(model, data_loader_test, device=device)
print("That's it!")
if __name__ == "__main__":
main()
Epoch: [0] [ 0/120] eta: 0:01:41 lr: 0.000047 loss: 7.3028 (7.3028) loss_classifier: 1.0316 (1.0316) loss_box_reg: 0.0827 (0.0827) loss_mask: 6.1742 (6.1742) loss_objectness: 0.0097 (0.0097) loss_rpn_box_reg: 0.0046 (0.0046) time: 0.8468 data: 0.0803 max mem: 1067
Epoch: [0] [ 10/120] eta: 0:01:02 lr: 0.000467 loss: 2.0995 (3.5058) loss_classifier: 0.6684 (0.6453) loss_box_reg: 0.0999 (0.1244) loss_mask: 1.2471 (2.7069) loss_objectness: 0.0187 (0.0235) loss_rpn_box_reg: 0.0060 (0.0057) time: 0.5645 data: 0.0089 max mem: 1499
Epoch: [0] [ 20/120] eta: 0:00:56 lr: 0.000886 loss: 1.0166 (2.1789) loss_classifier: 0.2844 (0.4347) loss_box_reg: 0.1631 (0.1540) loss_mask: 0.4710 (1.5562) loss_objectness: 0.0187 (0.0242) loss_rpn_box_reg: 0.0082 (0.0099) time: 0.5524 data: 0.0020 max mem: 1704
Epoch: [0] [ 30/120] eta: 0:00:50 lr: 0.001306 loss: 0.5554 (1.6488) loss_classifier: 0.1258 (0.3350) loss_box_reg: 0.1356 (0.1488) loss_mask: 0.2355 (1.1285) loss_objectness: 0.0142 (0.0224) loss_rpn_box_reg: 0.0127 (0.0142) time: 0.5653 data: 0.0023 max mem: 1756
Epoch: [0] [ 40/120] eta: 0:00:45 lr: 0.001726 loss: 0.4520 (1.3614) loss_classifier: 0.1055 (0.2773) loss_box_reg: 0.1101 (0.1530) loss_mask: 0.1984 (0.8981) loss_objectness: 0.0063 (0.0189) loss_rpn_box_reg: 0.0139 (0.0140) time: 0.5621 data: 0.0023 max mem: 1776
Epoch: [0] [ 50/120] eta: 0:00:39 lr: 0.002146 loss: 0.3448 (1.1635) loss_classifier: 0.0622 (0.2346) loss_box_reg: 0.1004 (0.1438) loss_mask: 0.1650 (0.7547) loss_objectness: 0.0033 (0.0172) loss_rpn_box_reg: 0.0069 (0.0131) time: 0.5535 data: 0.0022 max mem: 1776
Epoch: [0] [ 60/120] eta: 0:00:33 lr: 0.002565 loss: 0.3292 (1.0543) loss_classifier: 0.0549 (0.2101) loss_box_reg: 0.1113 (0.1486) loss_mask: 0.1596 (0.6668) loss_objectness: 0.0017 (0.0148) loss_rpn_box_reg: 0.0082 (0.0140) time: 0.5590 data: 0.0022 max mem: 1776
Epoch: [0] [ 70/120] eta: 0:00:28 lr: 0.002985 loss: 0.4105 (0.9581) loss_classifier: 0.0534 (0.1877) loss_box_reg: 0.1049 (0.1438) loss_mask: 0.1709 (0.5995) loss_objectness: 0.0015 (0.0132) loss_rpn_box_reg: 0.0133 (0.0138) time: 0.5884 data: 0.0023 max mem: 1783
Epoch: [0] [ 80/120] eta: 0:00:22 lr: 0.003405 loss: 0.3080 (0.8817) loss_classifier: 0.0441 (0.1706) loss_box_reg: 0.0875 (0.1343) loss_mask: 0.1960 (0.5510) loss_objectness: 0.0015 (0.0122) loss_rpn_box_reg: 0.0071 (0.0137) time: 0.5812 data: 0.0023 max mem: 1783
Epoch: [0] [ 90/120] eta: 0:00:17 lr: 0.003825 loss: 0.2817 (0.8171) loss_classifier: 0.0397 (0.1570) loss_box_reg: 0.0499 (0.1257) loss_mask: 0.1777 (0.5098) loss_objectness: 0.0008 (0.0111) loss_rpn_box_reg: 0.0068 (0.0136) time: 0.5644 data: 0.0022 max mem: 1794
Epoch: [0] [100/120] eta: 0:00:11 lr: 0.004244 loss: 0.2139 (0.7569) loss_classifier: 0.0310 (0.1446) loss_box_reg: 0.0327 (0.1163) loss_mask: 0.1573 (0.4731) loss_objectness: 0.0003 (0.0101) loss_rpn_box_reg: 0.0050 (0.0128) time: 0.5685 data: 0.0022 max mem: 1794
Epoch: [0] [110/120] eta: 0:00:05 lr: 0.004664 loss: 0.2139 (0.7160) loss_classifier: 0.0325 (0.1358) loss_box_reg: 0.0327 (0.1105) loss_mask: 0.1572 (0.4477) loss_objectness: 0.0003 (0.0093) loss_rpn_box_reg: 0.0047 (0.0128) time: 0.5775 data: 0.0022 max mem: 1794
Epoch: [0] [119/120] eta: 0:00:00 lr: 0.005000 loss: 0.2486 (0.6830) loss_classifier: 0.0330 (0.1282) loss_box_reg: 0.0360 (0.1051) loss_mask: 0.1686 (0.4284) loss_objectness: 0.0003 (0.0086) loss_rpn_box_reg: 0.0074 (0.0125) time: 0.5655 data: 0.0022 max mem: 1794
Epoch: [0] Total time: 0:01:08 (0.5676 s / it)
creating index...
index created!
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/numpy/core/function_base.py", line 117, in linspace
num = operator.index(num)
TypeError: 'numpy.float64' object cannot be interpreted as an integer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/cdahms/workspace-apps/PennFudanExample/tv-training-code.py", line 166, in <module>
main()
File "/home/cdahms/workspace-apps/PennFudanExample/tv-training-code.py", line 161, in main
evaluate(model, data_loader_test, device=device)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
return func(*args, **kwargs)
File "/home/cdahms/workspace-apps/PennFudanExample/engine.py", line 80, in evaluate
coco_evaluator = CocoEvaluator(coco, iou_types)
File "/home/cdahms/workspace-apps/PennFudanExample/coco_eval.py", line 28, in __init__
self.coco_eval[iou_type] = COCOeval(coco_gt, iouType=iou_type)
File "/home/cdahms/models/research/pycocotools/cocoeval.py", line 75, in __init__
self.params = Params(iouType=iouType) # parameters
File "/home/cdahms/models/research/pycocotools/cocoeval.py", line 527, in __init__
self.setDetParams()
File "/home/cdahms/models/research/pycocotools/cocoeval.py", line 506, in setDetParams
self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True)
File "<__array_function__ internals>", line 6, in linspace
File "/usr/local/lib/python3.6/dist-packages/numpy/core/function_base.py", line 121, in linspace
.format(type(num)))
TypeError: object of type <class 'numpy.float64'> cannot be safely interpreted as an integer.
Process finished with exit code 1
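The failure at the bottom of that traceback can be reproduced without pycocotools at all: under numpy >= 1.18, passing a numpy.float64 as the `num` argument of `np.linspace` raises exactly this TypeError. A minimal sketch (variable names are mine):

```python
import numpy as np

# pycocotools computes the number of IoU thresholds like this; np.round
# returns a numpy.float64, so num ends up a float rather than an int
num = np.round((0.95 - 0.5) / 0.05) + 1      # float64 with value 10.0

try:
    np.linspace(0.5, 0.95, num, endpoint=True)
    raised = False
except TypeError:
    raised = True   # numpy >= 1.18 rejects the non-integer num

# an explicit int cast restores the intended behaviour: 10 thresholds
iou_thrs = np.linspace(0.5, 0.95, int(num), endpoint=True)
```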
I have tried re-installing torch, torchvision, and pycocotools, and re-copying the files coco_eval.py, coco_utils.py, engine.py, transforms.py, and utils.py; I have also tried checking out torchvision v0.5.0, v0.4.2, and the latest commit, and all of these produce the same error.
I also tried reverting numpy to 1.11.0, but that version is really old now, so it would likely cause problems with other packages.
From the traceback it looks like I could add an int cast somewhere, or change a division from / to //, but I am really hesitant to make internal changes to pycocotools, or worse, to numpy. Also, since this error did not occur previously and does not occur on another computer, I do not suspect those would be good ideas anyway.
The call where the error occurs is:
evaluate(model, data_loader_test, device=device)
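As an aside, changing that division from `/` to `//` would not behave as hoped even if one were willing to patch pycocotools: floor division of these floats still yields a float, and floating-point representation makes the count wrong as well. A quick sketch:

```python
import numpy as np

# what pycocotools does: round, then add 1 -- value 10.0, but still a float
count_div = np.round((0.95 - 0.5) / 0.05) + 1

# the floor-division idea: (0.95 - 0.5) / 0.05 is 8.999... in binary
# floating point, so flooring it gives 8.0, and adding 1 gives 9.0 --
# one threshold short, and still not an int
count_floordiv = (0.95 - 0.5) // 0.05 + 1
```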
Best Answer
!@#$%^&
After about 15 hours, I finally figured this out: numpy 1.18.0 (released 5 days ago as of this writing) breaks the evaluation process for both TensorFlow and PyTorch object detection. Long story short, the fix is:
sudo -H pip3 install numpy==1.17.4
sudo -H pip3 install pycocotools
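After pinning, it can be worth confirming which numpy the interpreter actually resolves, for example when multiple Python environments are present. A small check, with the (1, 18) threshold taken from the analysis above:

```python
import numpy as np

# versions >= 1.18 are the ones that break the unpatched pycocotools
version = tuple(int(p) for p in np.__version__.split(".")[:2])
if version >= (1, 18):
    print(np.__version__, "- breaks unpatched pycocotools evaluation")
else:
    print(np.__version__, "- works with unpatched pycocotools evaluation")
```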
This has apparently been fixed in pycocotools with this commit, but as for when the updated version will make it into the pycocotools pip3 package, I do not know.
Regarding "python - PyTorch and TensorFlow object detection - evaluate - object of type <class 'numpy.float64'> cannot be safely interpreted as an integer", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/59493606/