Downloading a large number of files to a local machine in parallel with gsutil is trivial, by adding the -m flag:

gsutil -m cp gs://my-bucket/blob_prefix* .
However, I want to do this from within Python using the storage API:

client = storage.Client()
bucket = client.get_bucket(gs_bucket_name)
blobs = [blob for blob in bucket.list_blobs(prefix=blob_prefix)]
for blob in blobs:
    blob.download_to_filename(filename)

How can I download the blobs in parallel (ideally with blob.download_as_string()), preferably into a generator? The order really does not matter.
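For reference, the parallel fetch-into-a-generator shape being asked for can be sketched with the standard library alone; `fetch_blob` below is a hypothetical stand-in for the real `blob.download_as_string()` call, so the sketch runs without any GCS credentials:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_blob(blob_name):
    """Stand-in for blob.download_as_string(); returns fake content."""
    return "content-of-{}".format(blob_name)

def download_parallel(blob_names, max_workers=10):
    """Yield (blob_name, content) pairs as downloads complete.

    The yield order follows completion order, not input order,
    which matches the "order does not matter" requirement.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch_blob, name): name for name in blob_names}
        for future in as_completed(futures):
            yield futures[future], future.result()

results = dict(download_parallel(["a.json", "b.json", "c.json"]))
```

Swapping `fetch_blob` for a function that creates a `Blob` and downloads it would turn this into a real downloader, since threads are a good fit for the I/O-bound part of the problem.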
My current workaround shells out to gsutil (it assumes os, shutil, json, subprocess and the bucket object from above are available):

def fetch_data_from_storage(fetch_pattern):
    """Download blobs to local first, then load them into a Generator."""
    tmp_save_dir = os.path.join("/tmp", "tmp_gs_download")
    if os.path.isdir(tmp_save_dir):
        shutil.rmtree(tmp_save_dir)  # empty tmp dir first
    os.makedirs(tmp_save_dir)  # create tmp dir

    download_command = ["gsutil", "-m", "cp", "gs://{}/{}".format(bucket.name, fetch_pattern), tmp_save_dir]
    resp = subprocess.call(download_command)

    for file in os.listdir(tmp_save_dir):
        with open(os.path.join(tmp_save_dir, file), 'r') as f_data:
            content = json.load(f_data)
            yield content
Best answer
OK, here is my multiprocessing + multithreading solution. Here is how it works:

1) gsutil ls -l <pattern> is run via subprocess to fetch the list of blob names together with their file sizes; the pattern is the one entered in __main__.
2) The blob names are grouped into batches by cumulative file size.
3) The batches are distributed over cpu_count - 2 processes, and within each process the blobs of a batch are downloaded by a group of threads.
4) For comparison: a plain gsutil -m cp (no metadata) took 0m35s.

from typing import Iterable, Generator
import logging
import json
import datetime
import re
import subprocess
import multiprocessing
import threading
from google.cloud import storage
logging.basicConfig(level='INFO')
logger = logging.getLogger(__name__)
class StorageDownloader:

    def __init__(self, bucket_name):
        self.bucket_name = bucket_name
        self.bucket = storage.Client().bucket(bucket_name)
    def create_blob_batches_by_pattern(self, fetch_pattern, max_batch_size=1e6):
        """Fetch all blob names matching the pattern and batch them by blob size.

        :param fetch_pattern: The gsutil matching pattern for files we want to download.
            A gsutil pattern is used instead of a blob prefix because it is more powerful.
        :type fetch_pattern: str
        :param max_batch_size: Maximum size per batch in bytes. Default = 1 MB = 1e6 bytes.
        :type max_batch_size: float or int
        :return: Generator of batches of blob names.
        :rtype: Generator of list
        """
        list_command = ["gsutil", "ls", "-l", "gs://{}/{}".format(self.bucket.name, fetch_pattern)]
        logger.info("Gsutil list command: {}".format(list_command))
        blob_details_raw = subprocess.check_output(list_command).decode()
        regexp = r"(\d+) +\d\d\d\d-\d\d-\d\dT\d\d:\d\d:\d\dZ +gs:\/\/\S+?\/(\S+)"
        # re.finditer returns a generator, so we don't duplicate the memory need
        # in case the listing is quite large
        cum_batch_size = 0
        batch = []
        batch_nr = 1
        for reg_match in re.finditer(regexp, blob_details_raw):
            blob_name = reg_match.group(2)
            byte_size = int(reg_match.group(1))
            batch.append(blob_name)
            cum_batch_size += byte_size
            if cum_batch_size > max_batch_size:
                yield batch
                batch = []
                cum_batch_size = 0
                batch_nr += 1
        logger.info("Created {} batches with roughly max batch size = {} bytes".format(batch_nr, int(max_batch_size)))
        if batch:
            yield batch  # if we still have a partial batch left, it must also be yielded
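The size-based batching in that method can be checked in isolation: the snippet below runs the same regex over sample `gsutil ls -l`-style lines (size, timestamp, URL — the format the method assumes) and groups the names by cumulative byte size. The file names and sizes are made up for the example:

```python
import re

# Fake `gsutil ls -l` output: size, timestamp, blob URL per line.
sample_listing = """\
    1000  2018-07-20T15:58:23Z  gs://my-bucket/data/part-000.json
    2500  2018-07-20T15:58:24Z  gs://my-bucket/data/part-001.json
     800  2018-07-20T15:58:25Z  gs://my-bucket/data/part-002.json
"""

regexp = r"(\d+) +\d\d\d\d-\d\d-\d\dT\d\d:\d\d:\d\dZ +gs:\/\/\S+?\/(\S+)"

def batch_by_size(listing, max_batch_size):
    """Group blob names into batches whose cumulative size exceeds max_batch_size."""
    batch, cum = [], 0
    for match in re.finditer(regexp, listing):
        batch.append(match.group(2))  # blob name (path after the bucket)
        cum += int(match.group(1))    # byte size
        if cum > max_batch_size:
            yield batch
            batch, cum = [], 0
    if batch:
        yield batch  # partial final batch

batches = list(batch_by_size(sample_listing, max_batch_size=3000))
```

With a 3000-byte cap, the first two blobs (1000 + 2500 bytes) form one batch and the last blob ends up alone in the final partial batch.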
    @staticmethod
    def download_batch_into_memory(batch, bucket, include_metadata=True, max_threads=10):
        """Given a batch of storage filenames, download them into memory.

        Downloading the files in a batch is multithreaded.

        :param batch: A list of blob names to download.
        :type batch: list of str
        :param bucket: The google api bucket.
        :type bucket: google.cloud.storage.bucket.Bucket
        :param include_metadata: True to include the blob metadata in the result.
        :type include_metadata: bool
        :param max_threads: Number of threads to use for downloading a batch. Don't increase this over 10.
        :type max_threads: int
        :return: Complete blob contents and metadata.
        :rtype: dict
        """
        def download_blob(blob_name, state):
            """Standalone function so that we can multithread this."""
            blob = bucket.blob(blob_name=blob_name)
            content = json.loads(blob.download_as_string())
            if include_metadata:
                blob.reload()
                metadata = blob.metadata
                if metadata:
                    state[blob_name] = {**content, **metadata}
                    return
            state[blob_name] = content

        batch_data = {bn: {} for bn in batch}
        threads = []
        active_thread_count = 0
        for blobname in batch:
            thread = threading.Thread(target=download_blob, kwargs={"blob_name": blobname, "state": batch_data})
            threads.append(thread)
            thread.start()
            active_thread_count += 1
            if active_thread_count == max_threads:
                # Finish up threads in groups of max_threads. A better implementation would be a queue
                # from which the threads can feed, but this is good enough if the blob sizes are roughly the same.
                for thread in threads:
                    thread.join()
                threads = []
                active_thread_count = 0

        # wait for the last of the threads to finish
        for thread in threads:
            thread.join()
        return batch_data
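The comment in that method suggests a queue instead of joining threads in fixed groups, so slow blobs don't hold up a whole group. A minimal sketch of that variant, with a hypothetical `download_stub` in place of the real blob fetch:

```python
import threading
import queue

def download_stub(blob_name):
    """Stand-in for the real blob download; returns fake content."""
    return len(blob_name)

def download_with_queue(blob_names, max_threads=4):
    """Worker threads pull names from a shared queue until it is drained."""
    work = queue.Queue()
    for name in blob_names:
        work.put(name)
    state = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                name = work.get_nowait()
            except queue.Empty:
                return  # queue drained, thread exits
            content = download_stub(name)
            with lock:  # dict writes from many threads, guard them
                state[name] = content

    threads = [threading.Thread(target=worker) for _ in range(max_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state

result = download_with_queue(["a", "bb", "ccc"])
```

Each worker keeps pulling work as soon as it finishes its previous blob, so uneven blob sizes no longer cause idle threads.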
    def multiprocess_batches(self, batches, max_processes=None):
        """Spawn a parallel process for downloading and processing each batch.

        :param batches: An iterable of batches, probably a Generator.
        :type batches: Iterable
        :param max_processes: Maximum number of processes to spawn. None for cpu_count - 2.
        :type max_processes: int or None
        :return: The combined response from all the processes.
        :rtype: dict
        """
        if max_processes is None:
            max_processes = multiprocessing.cpu_count() - 2
        logger.info("Using {} processes to process batches".format(max_processes))

        def single_proc(mp_batch, mp_bucket, batchresults):
            """Standalone function so that we can multiprocess this."""
            proc_res = self.download_batch_into_memory(mp_batch, mp_bucket)
            batchresults.update(proc_res)

        batch_results = multiprocessing.Manager().dict()
        jobs = []
        for batch in batches:
            logger.info("Processing batch with {} elements".format(len(batch)))
            # the client is not process safe, so we need to recreate the client for each process
            bucket = storage.Client().get_bucket(self.bucket_name)
            # note: one process is started per batch, so max_processes is not strictly enforced here
            proc = multiprocessing.Process(
                target=single_proc,
                kwargs={"mp_batch": batch, "mp_bucket": bucket, "batchresults": batch_results}
            )
            jobs.append(proc)
            proc.start()
        for job in jobs:
            job.join()
        logger.info("Finished downloading {} blobs".format(len(batch_results)))
        return batch_results
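The `multiprocessing.Manager().dict()` used in that method is what lets separate processes report results back into one shared dict. A minimal demonstration of the pattern (it explicitly uses the POSIX `fork` start method, so it is not portable to Windows):

```python
import multiprocessing

def record_square(n, shared):
    """Each child process writes its result into the manager-backed dict."""
    shared[n] = n * n

ctx = multiprocessing.get_context("fork")  # fork assumed available (POSIX)
manager = ctx.Manager()
shared = manager.dict()

jobs = [ctx.Process(target=record_square, args=(n, shared)) for n in range(4)]
for job in jobs:
    job.start()
for job in jobs:
    job.join()

squares = dict(shared)  # copy proxy contents back into a plain dict
```

The manager runs a small server process; the `dict` the children receive is a proxy whose writes are forwarded to that server, which is why all results are visible in the parent after `join()`.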
    def bulk_download_as_dict(self, fetch_pattern):
        """Download blobs from google storage into memory.

        :param fetch_pattern: A gsutil storage pattern.
        :type fetch_pattern: str
        :return: A dict with k,v pairs = {blobname: blob_data}
        :rtype: dict
        """
        start = datetime.datetime.now()
        filename_batches = self.create_blob_batches_by_pattern(fetch_pattern)
        downloaded_data = self.multiprocess_batches(filename_batches)
        logger.info("Time taken to download = {}".format(datetime.datetime.now() - start))
        return downloaded_data


if __name__ == '__main__':
    stor = StorageDownloader("mybucket")
    data = stor.bulk_download_as_dict("some_prefix*")
Regarding python-3.x - Google Storage python api parallel download, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/51508478/