I am not very experienced with unsupervised learning, but my general understanding is that in unsupervised learning, the model learns without there being an output. However, during pre-training in models such as BERT or GPT-3, it seems to me that there is an output. For example, in BERT, some of the tokens in the input sequence are masked. Then, the model will try to predict those words. Since we already know what those masked words originally were, we can compare that with the prediction to find the loss. Isn't this basically supervised learning?
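To make the setup concrete, here is a toy sketch of what I mean (my own illustration, not BERT's actual preprocessing; real BERT also sometimes keeps or randomly replaces the selected tokens instead of always masking them):

```python
import random

def make_mlm_example(tokens, mask_token="[MASK]", mask_prob=0.15):
    """Randomly mask tokens; the originals at masked positions become the labels."""
    inputs, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            inputs.append(mask_token)
            labels.append(tok)      # label comes straight from the raw text
        else:
            inputs.append(tok)
            labels.append(None)     # no loss is computed at unmasked positions
    return inputs, labels

sentence = "the cat sat on the mat".split()
masked_input, labels = make_mlm_example(sentence)
print(masked_input)  # e.g. ['the', '[MASK]', 'sat', 'on', 'the', 'mat']
print(labels)        # e.g. [None, 'cat', None, None, None, None]
```

So the targets exist and a loss is computed against them, even though no human ever annotated anything.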
The pre-training phase of models like GPT is considered unsupervised because it learns from unlabeled data. In contrast, supervised learning relies on labeled data, which typically requires manual annotation.
Take document classification as an example. Imagine having tons of books, none of which tells you its subject. Labeling them for supervised learning would be hard, time-consuming, and expensive: imagine having to go through each book and manually write down its subject.
This distinction is outlined in the "Improving Language Understanding by Generative Pre-Training" paper.
To tell the truth, I also thought the pre-training phase was supervised training. Interestingly, in this video Andrej Karpathy explains that the masked tokens can be used as a source of supervision to update the transformer's weights.
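For what it's worth, here is a rough sketch of that idea (my own hedged example in PyTorch, not Karpathy's code): the loss is an ordinary cross-entropy, it's just that the targets were produced by the masking step rather than by a human annotator.

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, input_ids, labels):
    """One gradient update where the 'supervision' is the tokens the data hid from itself.

    `labels` holds the original token ids at masked positions and -100 elsewhere,
    so only the masked positions contribute to the loss.
    """
    logits = model(input_ids)                  # (batch, seq_len, vocab_size)
    loss = F.cross_entropy(
        logits.view(-1, logits.size(-1)),      # flatten to (batch * seq_len, vocab_size)
        labels.view(-1),                       # original tokens act as the targets
        ignore_index=-100,                     # skip unmasked positions
    )
    optimizer.zero_grad()
    loss.backward()                            # gradients flow from the self-generated labels
    optimizer.step()                           # update the transformer's weights
    return loss.item()
```

The update itself is ordinary gradient descent on a cross-entropy loss; the only thing that distinguishes it from the supervised case is that no human produced the targets.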