I am learning to serve models with TorchServe and I am new to it. This is the handler file I created to serve a VGG16 model; I am using the model from Kaggle.
My handler.py file:
import io
import os
import logging
import torch
import numpy as np
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms, datasets, models
from ts.torch_handler.image_classifier import ImageClassifier
from ts.torch_handler.base_handler import BaseHandler
from ts.utils.util import list_classes_from_module
import importlib
from torch.autograd import Variable
import seaborn as sns
import torchvision
from torch import optim, cuda
from torch.utils.data import DataLoader, sampler
import torch.nn as nn
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
# Data science tools
import pandas as pd

#path = 'C:\\Users\\fazil\\OneDrive\\Desktop\\pytorch\\vgg11\\vgg16.pt'
path = r'C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\vgg16.pt'
#image = r'C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\normal.jpeg'


class VGGImageClassifier(ImageClassifier):
    """
    Overriding the model loading code as a workaround for issue :
    https://github.com/pytorch/serve/issues/535
    https://github.com/pytorch/vision/issues/2473
    """

    def __init__(self):
        self.model = None
        self.mapping = None
        self.device = None
        self.initialized = False

    def initialize(self, context):
        """load eager mode state_dict based model"""
        properties = context.system_properties
        #self.device = torch.device(
        #    "cuda:" + str(properties.get("gpu_id"))
        #    if torch.cuda.is_available()
        #    else "cpu"
        #)
        model_dir = properties.get("model_dir")
        model_pt_path = os.path.join(model_dir, "model.pt")
        # Read model definition file
        model_def_path = os.path.join(model_dir, "model.py")
        if not os.path.isfile(model_def_path):
            raise RuntimeError("Missing the model definition file")
        checkpoint = torch.load(path, map_location='cpu')
        logging.error('%s ', checkpoint)
        self.model = models.vgg16(pretrained=True)
        logging.error('%s ', self.model)
        self.model.classifier = checkpoint['classifier']
        logging.error('%s ', self.model.classifier)
        self.model.load_state_dict(checkpoint['state_dict'], strict=False)
        self.model.class_to_idx = checkpoint['class_to_idx']
        self.model.idx_to_class = checkpoint['idx_to_class']
        self.model.epochs = checkpoint['epochs']
        optimizer = checkpoint['optimizer']
        optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
        for param in model.parameters():
            param.requires_grad = False
        logger.debug('Model file {0} loaded successfully'.format(model_pt_path))
        self.initialized = True

    def preprocess(self, data):
        image = data.get("data")
        if image is None:
            image = data.get("body")
        image_transform = transforms.Compose([
            transforms.Resize(size=256),
            transforms.CenterCrop(size=224),
            transforms.ToTensor(),
            transforms.Normalize((0.5), (0.5))
        ])
        image = Image.open(io.BytesIO(image)).convert('RGB')
        image = image_transform(image)
        image = image.unsqueeze(0)
        return image

    def inference(self, image):
        outs = self.model.forward(image)
        probs = F.softmax(outs, dim=1)
        preds = torch.argmax(probs, dim=1)
        logging.error('%s ', preds)
        return preds

    def postprocess(self, preds):
        res = []
        preds = preds.cpu().tolist()
        for pred in preds:
            label = self.mapping[str(pred)][1]
            res.append({'label': label, 'index': pred})
        return res


_service = VGGImageClassifier()


def handle(data, context):
    if not _service.initialized:
        _service.initialize(context)
    if data is None:
        return None
    data = _service.preprocess(data)
    data = _service.inference(data)
    data = _service.postprocess(data)
    return data
Here is the error I get:
Torchserve version: 0.3.1
TS Home: C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages
Current directory: C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11
Temp directory: C:\Users\fazil\AppData\Local\Temp
Number of GPUs: 0
Number of CPUs: 4
Max heap size: 3038 M
Python executable: c:\users\fazil\anaconda3\envs\serve\python.exe
Config file: ./config.properties
Inference address: http://0.0.0.0:8080
Management address: http://0.0.0.0:8081
Metrics address: http://0.0.0.0:8082
Model Store: C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\model_store
Initial Models: vgg16.mar
Log dir: C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\logs
Metrics dir: C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\logs
Netty threads: 32
Netty client threads: 0
Default workers per model: 4
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Prefer direct buffer: false
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: false
Metrics report format: prometheus
Enable metrics API: true
2021-04-08 12:33:22,517 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: vgg16.mar
2021-04-08 12:33:40,392 [INFO ] main org.pytorch.serve.archive.ModelArchive - eTag 85b61fc819804aea9db0ca8786c2e427
2021-04-08 12:33:40,423 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model vgg16
2021-04-08 12:33:40,424 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model vgg16
2021-04-08 12:33:40,424 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model vgg16 loaded.
2021-04-08 12:33:40,426 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: vgg16, count: 4
2021-04-08 12:33:40,481 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: NioServerSocketChannel.
2021-04-08 12:33:41,173 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: None
2021-04-08 12:33:41,177 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: None
2021-04-08 12:33:41,180 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]12328
2021-04-08 12:33:41,180 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]14588
2021-04-08 12:33:41,180 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-04-08 12:33:41,181 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-04-08 12:33:41,181 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.8
2021-04-08 12:33:41,181 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.8
2021-04-08 12:33:41,186 [DEBUG] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-vgg16_1.0 State change null -> WORKER_STARTED
2021-04-08 12:33:41,186 [DEBUG] W-9002-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9002-vgg16_1.0 State change null -> WORKER_STARTED
2021-04-08 12:33:41,199 [INFO ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9001
2021-04-08 12:33:41,199 [INFO ] W-9002-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9002
2021-04-08 12:33:41,240 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: None
2021-04-08 12:33:41,244 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]12008
2021-04-08 12:33:41,244 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-04-08 12:33:41,245 [DEBUG] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-vgg16_1.0 State change null -> WORKER_STARTED
2021-04-08 12:33:41,245 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.8
2021-04-08 12:33:41,245 [INFO ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9000
2021-04-08 12:33:41,255 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: None
2021-04-08 12:33:41,260 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]15216
2021-04-08 12:33:41,260 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-04-08 12:33:41,261 [DEBUG] W-9003-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9003-vgg16_1.0 State change null -> WORKER_STARTED
2021-04-08 12:33:41,261 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.8
2021-04-08 12:33:41,262 [INFO ] W-9003-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9003
2021-04-08 12:33:41,768 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://0.0.0.0:8080
2021-04-08 12:33:41,768 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: NioServerSocketChannel.
2021-04-08 12:33:41,774 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://0.0.0.0:8081
2021-04-08 12:33:41,775 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: NioServerSocketChannel.
2021-04-08 12:33:41,777 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://0.0.0.0:8082
2021-04-08 12:33:41,784 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: ('127.0.0.1', 9001).
2021-04-08 12:33:41,784 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: ('127.0.0.1', 9002).
2021-04-08 12:33:41,784 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: ('127.0.0.1', 9000).
2021-04-08 12:33:41,784 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: ('127.0.0.1', 9003).
Model server started.
2021-04-08 12:33:48,486 [INFO ] pool-2-thread-1 TS_METRICS - CPUUtilization.Percent:100.0|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,487 [INFO ] pool-2-thread-1 TS_METRICS - DiskAvailable.Gigabytes:74.49674987792969|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,491 [INFO ] pool-2-thread-1 TS_METRICS - DiskUsage.Gigabytes:147.9403419494629|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,496 [INFO ] pool-2-thread-1 TS_METRICS - DiskUtilization.Percent:66.5|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,499 [INFO ] pool-2-thread-1 TS_METRICS - MemoryAvailable.Megabytes:4488.515625|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,504 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUsed.Megabytes:7658.80859375|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,513 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUtilization.Percent:63.0|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:34:24,385 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2021-04-08 12:34:24,439 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-04-08 12:34:24,440 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 182, in <module>
2021-04-08 12:34:24,443 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2021-04-08 12:34:24,444 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 154, in run_server
2021-04-08 12:34:24,446 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2021-04-08 12:34:24,446 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 116, in handle_connection
2021-04-08 12:34:24,447 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2021-04-08 12:34:24,448 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 89, in load_model
2021-04-08 12:34:24,523 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size, envelope)
2021-04-08 12:34:24,582 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 104, in load
2021-04-08 12:34:24,597 [INFO ] nioEventLoopGroup-5-2 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED
2021-04-08 12:34:24,583 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2021-04-08 12:34:24,646 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn(service.context)
2021-04-08 12:34:24,646 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-04-08 12:34:24,649 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 182, in <module>
2021-04-08 12:34:24,649 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2021-04-08 12:34:24,650 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 154, in run_server
2021-04-08 12:34:24,648 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 131, in <lambda>
2021-04-08 12:34:24,652 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2021-04-08 12:34:24,649 [DEBUG] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2021-04-08 12:34:24,734 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 116, in handle_connection
2021-04-08 12:34:24,653 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn = lambda ctx: entry_point(None, ctx)
2021-04-08 12:34:24,734 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2021-04-08 12:34:24,735 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 268, in handle
2021-04-08 12:34:24,735 [DEBUG] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133)
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432)
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:188)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
2021-04-08 12:34:24,736 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 89, in load_model
2021-04-08 12:34:24,753 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size, envelope)
2021-04-08 12:34:24,736 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - _service.initialize(context)
2021-04-08 12:34:24,754 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 111, in initialize
2021-04-08 12:34:24,754 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 104, in load
2021-04-08 12:34:24,754 [WARN ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: vgg16, error: Worker died.
2021-04-08 12:34:24,756 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn(service.context)
2021-04-08 12:34:24,755 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.model.classifier = checkpoint['classifier']
2021-04-08 12:34:24,758 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 131, in <lambda>
2021-04-08 12:34:24,810 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn = lambda ctx: entry_point(None, ctx)
2021-04-08 12:34:24,811 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 268, in handle
2021-04-08 12:34:24,757 [DEBUG] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-vgg16_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2021-04-08 12:34:24,871 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - _service.initialize(context)
2021-04-08 12:34:24,872 [WARN ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-vgg16_1.0-stderr
2021-04-08 12:34:24,812 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - KeyError: 'classifier'
2021-04-08 12:34:24,872 [WARN ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-vgg16_1.0-stdout
2021-04-08 12:34:24,872 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 111, in initialize
2021-04-08 12:34:24,874 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.model.classifier = checkpoint['classifier']
2021-04-08 12:34:24,903 [INFO ] nioEventLoopGroup-5-1 org.pytorch.serve.wlm.WorkerThread - 9001 Worker disconnected. WORKER_STARTED
2021-04-08 12:34:24,876 [INFO ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 1 seconds.
2021-04-08 12:34:24,931 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - KeyError: 'classifier'
2021-04-08 12:34:24,932 [DEBUG] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2021-04-08 12:34:24,974 [DEBUG] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133)
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432)
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:188)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
2021-04-08 12:34:25,015 [WARN ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: vgg16, error: Worker died.
2021-04-08 12:34:25,015 [DEBUG] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-vgg16_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2021-04-08 12:34:25,016 [WARN ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9001-vgg16_1.0-stderr
2021-04-08 12:34:25,017 [WARN ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9001-vgg16_1.0-stdout
2021-04-08 12:34:25,017 [INFO ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9001 in 1 seconds.
2021-04-08 12:34:25,038 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-vgg16_1.0-stdout
2021-04-08 12:34:25,038 [INFO ] W-9000-vgg16_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-vgg16_1.0-stderr
2021-04-08 12:34:25,085 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9001-vgg16_1.0-stdout
2021-04-08 12:34:25,085 [INFO ] W-9001-vgg16_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9001-vgg16_1.0-stderr
2021-04-08 12:34:25,247 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2021-04-08 12:34:25,247 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-04-08 12:34:25,248 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 182, in <module>
2021-04-08 12:34:25,248 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2021-04-08 12:34:25,248 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 154, in run_server
2021-04-08 12:34:25,249 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2021-04-08 12:34:25,250 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 116, in handle_connection
2021-04-08 12:34:25,250 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2021-04-08 12:34:25,251 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 89, in load_model
2021-04-08 12:34:25,251 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size, envelope)
2021-04-08 12:34:25,253 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 104, in load
2021-04-08 12:34:25,253 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn(service.context)
2021-04-08 12:34:25,254 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 131, in <lambda>
2021-04-08 12:34:25,254 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn = lambda ctx: entry_point(None, ctx)
2021-04-08 12:34:25,254 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 268, in handle
2021-04-08 12:34:25,255 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - _service.initialize(context)
2021-04-08 12:34:25,256 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 111, in initialize
2021-04-08 12:34:25,257 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.model.classifier = checkpoint['classifier']
2021-04-08 12:34:25,257 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - KeyError: 'classifier'
2021-04-08 12:34:25,454 [INFO ] nioEventLoopGroup-5-4 org.pytorch.serve.wlm.WorkerThread - 9002 Worker disconnected. WORKER_STARTED
2021-04-08 12:34:25,456 [DEBUG] W-9002-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2021-04-08 12:34:25,457 [DEBUG] W-9002-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133)
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432)
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:188)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
2021-04-08 12:34:25,482 [WARN ] W-9002-vgg16_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: vgg16, error: Worker died.
I also load the model from the hard-coded path, because I get an error if I use model_pt_path. Can someone help me?
Best answer
"i am using the model from kaggle"

I assume you got the model from https://www.kaggle.com/pytorch/vgg16 .
I think you are not loading the model correctly. You are treating the file as a checkpoint dict, which would only work if the model had been saved like this:
torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'loss': loss,
    ...
}, PATH)
It was most likely saved like this instead:
torch.save(model.state_dict(), PATH)
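If you are unsure which of the two formats your vgg16.pt file uses, a quick check like the sketch below (my own addition, reusing the path from the question) shows whether torch.load returns a full checkpoint dict or a plain state_dict:

import torch

path = r'C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\vgg16.pt'
obj = torch.load(path, map_location='cpu')

# A checkpoint saved with the first pattern is a dict with keys such as
# 'state_dict' or 'model_state_dict'; a plain state_dict is an OrderedDict
# whose keys are parameter names like 'features.0.weight'.
print(type(obj))
if isinstance(obj, dict):
    print(list(obj.keys())[:10])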
That mismatch would explain the KeyError: the file is a plain state_dict, so indexing it with 'classifier' fails. I modified the initialize method to handle the second case:
def initialize(self, context):
    """load eager mode state_dict based model"""
    properties = context.system_properties
    model_dir = properties.get("model_dir")
    model_pt_path = os.path.join(model_dir, "model.pt")
    # Read model definition file
    model_def_path = os.path.join(model_dir, "model.py")
    if not os.path.isfile(model_def_path):
        raise RuntimeError("Missing the model definition file")
    state_dict = torch.load(path, map_location='cpu')
    # logging.error('%s ', checkpoint)
    self.model = models.vgg16(pretrained=True)
    logging.error('%s ', self.model)
    # self.model.classifier = checkpoint['classifier']
    # logging.error('%s ', self.model.classifier)
    self.model.load_state_dict(state_dict, strict=False)
    # self.model.class_to_idx = checkpoint['class_to_idx']
    # self.model.idx_to_class = checkpoint['idx_to_class']
    # self.model.epochs = checkpoint['epochs']
    # optimizer = checkpoint['optimizer']
    # optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    # for param in model.parameters():
    #     param.requires_grad = False
    # logger.debug('Model file {0} loaded successfully'.format(model_pt_path))
    self.initialized = True
With the model linked above, I was able to start torchserve successfully.
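Once the workers load the model without the KeyError, a quick smoke test of the inference endpoint might look like the sketch below; the port comes from the log above (8080) and normal.jpeg is the test image commented out in the question, so adjust both to your setup:

import requests

# Send the raw image bytes to TorchServe's prediction endpoint for the
# registered 'vgg16' model and print whatever the handler returns.
with open(r'C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\normal.jpeg', 'rb') as f:
    resp = requests.post('http://127.0.0.1:8080/predictions/vgg16', data=f)

print(resp.status_code)
print(resp.text)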
Regarding "python - TorchServe fails to load the model", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/67000060/