I want to use the text detection feature of the Google Vision API to extract handwritten text from application forms. It extracts the handwritten text well, but returns a very unstructured JSON-like response that I don't know how to parse, since I only want to extract specific fields such as name, contact number, and email, and store them in a MySQL database.
Code (from https://cloud.google.com/vision/docs/detecting-fulltext#vision-document-text-detection-python):
import io


def detect_document(path):
    """Detects document features in an image."""
    from google.cloud import vision
    client = vision.ImageAnnotatorClient()

    with io.open(path, 'rb') as image_file:
        content = image_file.read()

    image = vision.types.Image(content=content)
    response = client.document_text_detection(image=image)

    for page in response.full_text_annotation.pages:
        for block in page.blocks:
            print('\nBlock confidence: {}\n'.format(block.confidence))
            for paragraph in block.paragraphs:
                print('Paragraph confidence: {}'.format(
                    paragraph.confidence))
                for word in paragraph.words:
                    word_text = ''.join([
                        symbol.text for symbol in word.symbols
                    ])
                    print('Word text: {} (confidence: {})'.format(
                        word_text, word.confidence))
                    # for symbol in word.symbols:
                    #     print('\tSymbol: {} (confidence: {})'.format(
                    #         symbol.text, symbol.confidence))
API response:
Block confidence: 0.9900000095367432
Paragraph confidence: 0.9900000095367432
Word text: A (confidence: 0.9900000095367432)
Word text: . (confidence: 0.9900000095367432)
Word text: Bank (confidence: 0.9900000095367432)
Word text: Challan (confidence: 0.9900000095367432)
Paragraph confidence: 0.9900000095367432
Word text: Bank (confidence: 0.9900000095367432)
Word text: Branch (confidence: 0.9800000190734863)
Block confidence: 0.44999998807907104
Paragraph confidence: 0.44999998807907104
Word text: ca (confidence: 0.44999998807907104)
Block confidence: 0.7099999785423279
Paragraph confidence: 0.7099999785423279
Word text: ABC (confidence: 0.9900000095367432)
Word text: muitce (confidence: 0.5699999928474426)
Block confidence: 0.7400000095367432
Paragraph confidence: 0.7400000095367432
Word text: Deposit (confidence: 0.8700000047683716)
Word text: ID (confidence: 0.6700000166893005)
Word text: VOSSÁETM (confidence: 0.5400000214576721)
Word text: - (confidence: 0.7900000214576721)
Word text: 0055 (confidence: 0.9300000071525574)
Block confidence: 0.800000011920929
Paragraph confidence: 0.800000011920929
Word text: Deposit (confidence: 0.9800000190734863)
Word text: Date (confidence: 0.9200000166893005)
Word text: 14 (confidence: 0.47999998927116394)
Word text: al (confidence: 0.27000001072883606)
Word text: 19 (confidence: 0.800000011920929)
Block confidence: 0.9900000095367432
Paragraph confidence: 0.9900000095367432
Word text: ate (confidence: 0.9900000095367432)
Block confidence: 0.9900000095367432
Paragraph confidence: 0.9900000095367432
Word text: B (confidence: 0.9900000095367432)
Word text: . (confidence: 0.9900000095367432)
Word text: Personal (confidence: 0.9900000095367432)
Word text: Information (confidence: 0.9900000095367432)
Word text: : (confidence: 0.9900000095367432)
Word text: Use (confidence: 0.9900000095367432)
Word text: CAPITAL (confidence: 0.9900000095367432)
Word text: letters (confidence: 0.9900000095367432)
Word text: and (confidence: 0.9900000095367432)
Word text: leave (confidence: 0.9900000095367432)
Word text: spaces (confidence: 0.9900000095367432)
Word text: between (confidence: 0.9900000095367432)
Word text: words (confidence: 0.9900000095367432)
Word text: . (confidence: 0.9900000095367432)
Block confidence: 0.9100000262260437
Paragraph confidence: 0.8999999761581421
Word text: Name (confidence: 0.9900000095367432)
Word text: : (confidence: 0.9700000286102295)
Word text: MUHAMMAD (confidence: 0.9599999785423279)
Word text: HANIE (confidence: 0.9200000166893005)
Word text: Father (confidence: 0.9900000095367432)
Word text: (confidence: 0.9900000095367432)
Word text: s (confidence: 1.0)
Word text: Name (confidence: 0.9900000095367432)
Word text: : (confidence: 0.9900000095367432)
Word text: MUHAMMAD (confidence: 0.9100000262260437)
Word text: Y (confidence: 0.8399999737739563)
Word text: AQOOB (confidence: 0.8500000238418579)
Word text: Computerized (confidence: 0.9800000190734863)
Word text: NIC (confidence: 0.9900000095367432)
Word text: No (confidence: 0.5400000214576721)
Word text: . (confidence: 0.9100000262260437)
Word text: 77 (confidence: 0.8899999856948853)
Word text: 356 (confidence: 0.8100000023841858)
Word text: - (confidence: 0.699999988079071)
Word text: 5 (confidence: 0.8600000143051147)
Word text: 284 (confidence: 0.5699999928474426)
Word text: 345 (confidence: 0.800000011920929)
Word text: - (confidence: 0.41999998688697815)
Word text: 3 (confidence: 0.8199999928474426)
Paragraph confidence: 0.8999999761581421
Word text: D (confidence: 0.699999988079071)
Word text: D (confidence: 0.5600000023841858)
Word text: M (confidence: 0.6700000166893005)
Word text: m (confidence: 0.6600000262260437)
Word text: rrrr (confidence: 0.6600000262260437)
Word text: Gender (confidence: 0.9900000095367432)
Word text: : (confidence: 1.0)
Word text: Male (confidence: 0.9900000095367432)
Word text: Age (confidence: 0.9800000190734863)
Word text: : (confidence: 0.9700000286102295)
Word text: ( (confidence: 0.9399999976158142)
Word text: in (confidence: 0.9700000286102295)
Word text: years (confidence: 0.9900000095367432)
Word text: ) (confidence: 0.9599999785423279)
Word text: 24 (confidence: 0.6499999761581421)
Word text: Date (confidence: 0.9900000095367432)
Word text: of (confidence: 0.9900000095367432)
Word text: Birth (confidence: 0.9900000095367432)
Word text: ( (confidence: 0.12999999523162842)
Word text: 4 (confidence: 0.9399999976158142)
Word text: - (confidence: 0.8999999761581421)
Word text: 06 (confidence: 0.9100000262260437)
Word text: - (confidence: 0.7400000095367432)
Word text: 1999 (confidence: 0.5099999904632568)
Word text: Domicile (confidence: 0.9900000095367432)
Word text: ( (confidence: 0.949999988079071)
Word text: District (confidence: 0.9900000095367432)
Word text: ) (confidence: 0.9900000095367432)
Word text: : (confidence: 0.9599999785423279)
Word text: Mirpuskhas (confidence: 0.8399999737739563)
Word text: Contact (confidence: 0.9399999976158142)
Word text: No (confidence: 0.9100000262260437)
Word text: . (confidence: 0.9900000095367432)
Word text: 0333 (confidence: 0.9900000095367432)
Word text: - (confidence: 0.9800000190734863)
Word text: 7072258 (confidence: 0.9900000095367432)
Paragraph confidence: 0.9200000166893005
Word text: ( (confidence: 0.9900000095367432)
Word text: Please (confidence: 0.9900000095367432)
Word text: do (confidence: 0.9800000190734863)
Word text: not (confidence: 1.0)
Word text: mention (confidence: 0.9900000095367432)
Word text: converted (confidence: 0.949999988079071)
Word text: No (confidence: 0.9900000095367432)
Word text: . (confidence: 0.9900000095367432)
Word text: ) (confidence: 0.9800000190734863)
Word text: Postal (confidence: 0.9900000095367432)
Word text: Address (confidence: 0.9800000190734863)
Word text: : (confidence: 0.9900000095367432)
Word text: Wasdev (confidence: 0.9300000071525574)
Word text: Book (confidence: 0.8799999952316284)
Word text: Depo (confidence: 0.9599999785423279)
Word text: Digri (confidence: 0.9599999785423279)
Word text: Taluka (confidence: 0.9900000095367432)
Word text: jhuddo (confidence: 0.9700000286102295)
Word text: Disstri (confidence: 0.7599999904632568)
Word text: mes (confidence: 0.38999998569488525)
Word text: . (confidence: 0.1899999976158142)
Block confidence: 0.9399999976158142
Paragraph confidence: 0.9399999976158142
Word text: Sindh (confidence: 0.9700000286102295)
Word text: . (confidence: 0.75)
Block confidence: 0.9800000190734863
Paragraph confidence: 0.9800000190734863
Word text: Are (confidence: 0.9900000095367432)
Word text: You (confidence: 0.9900000095367432)
Word text: Government (confidence: 0.9800000190734863)
Word text: Servant (confidence: 0.9900000095367432)
Word text: : (confidence: 0.9900000095367432)
Word text: Yes (confidence: 0.9900000095367432)
Word text: ( (confidence: 0.9900000095367432)
Word text: If (confidence: 0.7599999904632568)
Word text: yes (confidence: 0.9700000286102295)
Word text: , (confidence: 0.9900000095367432)
Word text: please (confidence: 0.9900000095367432)
Word text: attach (confidence: 0.9900000095367432)
Word text: NOC (confidence: 0.9900000095367432)
Word text: ) (confidence: 0.949999988079071)
Block confidence: 0.9900000095367432
Paragraph confidence: 0.9900000095367432
Word text: No (confidence: 0.9900000095367432)
Block confidence: 0.20999999344348907
Paragraph confidence: 0.20999999344348907
Word text: ✓ (confidence: 0.20999999344348907)
Block confidence: 0.9700000286102295
Paragraph confidence: 0.9700000286102295
Word text: Religion (confidence: 0.9599999785423279)
Word text: : (confidence: 0.9900000095367432)
Word text: Muslim (confidence: 0.9900000095367432)
Block confidence: 0.3799999952316284
Paragraph confidence: 0.3799999952316284
Word text: ✓ (confidence: 0.3799999952316284)
Block confidence: 0.9599999785423279
Paragraph confidence: 0.9599999785423279
Word text: Non (confidence: 0.9300000071525574)
Word text: - (confidence: 0.9399999976158142)
Word text: Muslimo (confidence: 0.9700000286102295)
Best answer
Replace the entire for loop with print(response.full_text_annotation.text).
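Once you have the flat text from response.full_text_annotation.text, the specific fields (name, contact number, etc.) can be pulled out with simple pattern matching before inserting them into MySQL. A minimal sketch, assuming label/value lines like those in the response above; the full_text sample and the extract_field helper are hypothetical illustrations, not part of the API:

```python
import re

# Hypothetical sample of the flat text that
# response.full_text_annotation.text would return for the form above.
full_text = """Name : MUHAMMAD HANIE
Father ' s Name : MUHAMMAD YAQOOB
Contact No . 0333 - 7072258
Domicile ( District ) : Mirpuskhas"""


def extract_field(label, text):
    """Return the value following 'label :' (or 'label .') on its line, else None."""
    match = re.search(rf"{re.escape(label)}\s*[:.]\s*(.+)", text)
    return match.group(1).strip() if match else None


fields = {
    "name": extract_field("Name", full_text),
    "fathers_name": extract_field("Father ' s Name", full_text),
    "contact_no": extract_field("Contact No", full_text),
}
print(fields)
# {'name': 'MUHAMMAD HANIE', 'fathers_name': 'MUHAMMAD YAQOOB', 'contact_no': '0333 - 7072258'}
```

The extracted values can then be written to MySQL with a parameterized INSERT (for example via mysql-connector-python). The labels and patterns here are only illustrative: handwritten OCR output is noisy, so adapt them to the actual form layout and validate the matches before storing them.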
Regarding python - Extracting handwritten text from application forms using the Google Cloud Vision API, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56065772/