I am trying to connect to a Spark cluster on Databricks, following this tutorial: https://docs.databricks.com/dev-tools/dbt.html . I installed the dbt-databricks connector ( https://github.com/databricks/dbt-databricks ). However, no matter how I configure it, I always get "Database Error, failed to connect" when I run dbt test / dbt debug.
This is my profiles.yaml:
databricks_cluster:
  outputs:
    dev:
      connect_retries: 5
      connect_timeout: 60
      host: <my_server_hostname>
      http_path: <my_http_path>
      schema: default
      token: <my_token>
      type: databricks
  target: dev
This is my dbt_project.yml:
# Name your project! Project names should contain only lowercase characters
# and underscores. A good package name should reflect your organization's
# name or the intended use of these models
name: 'dbt_dem'
version: '1.0.0'
config-version: 2
# This setting configures which "profile" dbt uses for this project.
profile: 'databricks_cluster'
# These configurations specify where dbt should look for different types of files.
# The `model-paths` config, for example, states that models in this project can be
# found in the "models/" directory. You probably won't need to change these!
model-paths: ["models"]
analysis-paths: ["analyses"]
test-paths: ["tests"]
seed-paths: ["seeds"]
macro-paths: ["macros"]
snapshot-paths: ["snapshots"]
target-path: "target" # directory which will store compiled SQL files
clean-targets: # directories to be removed by `dbt clean`
- "target"
- "dbt_packages"
# Configuring models
# Full documentation: https://docs.getdbt.com/docs/configuring-models
# In this example config, we tell dbt to build all models in the example/ directory
# as tables. These settings can be overridden in the individual model files
# using the `{{ config(...) }}` macro.
models:
  dbt_dem:
    # Config indicated by + and applies to all files under models/example/
    example:
      +materialized: view
I have also tried the spark connector, but I still get the same error. Any ideas on why I cannot connect to the Databricks cluster?
These are the logs corresponding to the error:
============================== 2022-02-18 08:43:22.123066 | 4b91f9d3-28ad-4f5a-93db-f431b6d9af14 ==============================
08:43:22.123066 [info ] [MainThread]: Running with dbt=1.0.1
08:43:22.123841 [debug] [MainThread]: running dbt with arguments Namespace(cls=<class 'dbt.task.debug.DebugTask'>, config_dir=False, debug=None, defer=None, event_buffer_size=None, fail_fast=None, log_cache_events=False, log_format=None, partial_parse=None, printer_width=None, profile=None, profiles_dir='/Users/keremaslan/.dbt', project_dir=None, record_timing_info=None, rpc_method=None, send_anonymous_usage_stats=None, single_threaded=False, state=None, static_parser=None, target=None, use_colors=None, use_experimental_parser=None, vars='{}', version_check=None, warn_error=None, which='debug', write_json=None)
08:43:22.124057 [debug] [MainThread]: Tracking: tracking
08:43:22.143750 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'invocation', 'label': 'start', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x7fb751ef42e0>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x7fb751ef4eb0>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x7fb751eca730>]}
08:43:22.236001 [debug] [MainThread]: Executing "git --help"
08:43:22.264682 [debug] [MainThread]: STDOUT: "b"usage: git [--version] [--help] [-C <path>] [-c <name>=<value>]\n [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]\n [-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare]\n [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]\n <command> [<args>]\n\nThese are common Git commands used in various situations:\n\nstart a working area (see also: git help tutorial)\n clone Clone a repository into a new directory\n init Create an empty Git repository or reinitialize an existing one\n\nwork on the current change (see also: git help everyday)\n add Add file contents to the index\n mv Move or rename a file, a directory, or a symlink\n restore Restore working tree files\n rm Remove files from the working tree and from the index\n sparse-checkout Initialize and modify the sparse-checkout\n\nexamine the history and state (see also: git help revisions)\n bisect Use binary search to find the commit that introduced a bug\n diff Show changes between commits, commit and working tree, etc\n grep Print lines matching a pattern\n log Show commit logs\n show Show various types of objects\n status Show the working tree status\n\ngrow, mark and tweak your common history\n branch List, create, or delete branches\n commit Record changes to the repository\n merge Join two or more development histories together\n rebase Reapply commits on top of another base tip\n reset Reset current HEAD to the specified state\n switch Switch branches\n tag Create, list, delete or verify a tag object signed with GPG\n\ncollaborate (see also: git help workflows)\n fetch Download objects and refs from another repository\n pull Fetch from and integrate with another repository or a local branch\n push Update remote refs along with associated objects\n\n'git help -a' and 'git help -g' list available subcommands and some\nconcept guides. See 'git help <command>' or 'git help <concept>'\nto read about a specific subcommand or concept.\nSee 'git help git' for an overview of the system.\n""
08:43:22.265387 [debug] [MainThread]: STDERR: "b''"
08:43:22.272505 [debug] [MainThread]: Acquiring new databricks connection "debug"
08:43:22.273434 [debug] [MainThread]: Using databricks connection "debug"
08:43:22.273833 [debug] [MainThread]: On debug: select 1 as id
08:43:22.274044 [debug] [MainThread]: Opening a new connection, currently in state init
08:43:22.888586 [debug] [MainThread]: Databricks adapter: Error while running:
select 1 as id
08:43:22.889031 [debug] [MainThread]: Databricks adapter: Database Error
failed to connect
08:43:22.889905 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'invocation', 'label': 'end', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x7fb751f7eaf0>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x7fb752113040>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x7fb7521130a0>]}
08:43:24.130154 [debug] [MainThread]: Connection 'debug' was properly closed.
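To narrow down whether the failure comes from dbt itself or from the connection details, the same probe query that dbt debug runs (select 1 as id) can be issued directly. The sketch below is not part of the original post; it assumes the databricks-sql-connector Python package and reuses the placeholder credentials from profiles.yaml above.

# Sanity check outside of dbt, using the databricks-sql-connector package
# (pip install databricks-sql-connector). Placeholders as in profiles.yaml.
from databricks import sql

connection = sql.connect(
    server_hostname="<my_server_hostname>",  # bare hostname, no https:// scheme
    http_path="<my_http_path>",
    access_token="<my_token>",
)
cursor = connection.cursor()
cursor.execute("select 1 as id")  # the same query dbt debug sends
print(cursor.fetchall())
cursor.close()
connection.close()

If this script fails with the same error, the host, http_path or token are wrong rather than anything in the dbt project.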
Best Answer
Check your profiles.yml (typically found here: ~/.dbt/profiles.yml) and make sure there is no https:// in the host, for example:
host: server.name.com # instead of https://server.name.com
http_path: /sql/protocolv1/o/0/0000-000000-text000 # note the leading slash
These mistakes are easy to make if, like me, you copied the http_path directly from the cluster configuration page and the host directly from the browser URL.
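Putting this fix together with the profile from the question, a corrected profiles.yaml would look roughly like this (a sketch only; the host and http_path values are the illustrative ones from this answer, not real credentials):

databricks_cluster:
  outputs:
    dev:
      type: databricks
      host: server.name.com                                 # bare hostname, no https:// prefix
      http_path: /sql/protocolv1/o/0/0000-000000-text000    # note the leading slash
      schema: default
      token: <my_token>
      connect_retries: 5
      connect_timeout: 60
  target: dev

After saving the profile, rerunning dbt debug should show the connection test passing if the host and path were the only problems.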
Another example of a profiles.yml can be found in the dbt-databricks README: https://github.com/databricks/dbt-databricks
Regarding "databricks - Unable to connect dbt to Databricks", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/71020949/