
LLM Fine Tuning - Supervised Fine Tuning Trainer (SFTTrainer) vs transformers Trainer

Reposted. Author: bug小助手. Updated: 2023-10-26 21:29:33



When should one opt for the Supervised Fine Tuning Trainer (SFTTrainer) instead of the regular transformers Trainer when it comes to instruction fine-tuning for Large Language Models (LLMs)? From what I gather, the regular transformers Trainer is typically used for unsupervised fine-tuning, and supervised fine-tuning is then applied for tasks such as input-output schema formatting. There seem to be various examples of fine-tuning tasks with similar characteristics, some employing the SFTTrainer and others the regular Trainer. Which factors should be considered when choosing between the two approaches?

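One practical difference worth noting: instruction fine-tuning usually computes the loss only on the response tokens, masking the prompt tokens with `-100` so PyTorch's cross-entropy ignores them. TRL's SFTTrainer can handle this for you (e.g. via its completion-only data collator), whereas with the plain Trainer you typically wire it up yourself. The snippet below is a toy, library-free sketch of that masking step (the function name and token ids are made up for illustration, not part of any API):

```python
# Toy illustration of completion-only label masking for instruction SFT.
# Prompt positions are set to -100 so they contribute nothing to the loss;
# only the response tokens are learned.

IGNORE_INDEX = -100  # value ignored by PyTorch's cross-entropy loss

def mask_prompt_labels(input_ids, prompt_len):
    """Copy input_ids into labels, masking the first prompt_len positions."""
    labels = list(input_ids)
    for i in range(min(prompt_len, len(labels))):
        labels[i] = IGNORE_INDEX
    return labels

# Fake token ids: 5 prompt tokens followed by 3 response tokens.
ids = [101, 7592, 2088, 2003, 102, 42, 43, 44]
labels = mask_prompt_labels(ids, prompt_len=5)
print(labels)  # [-100, -100, -100, -100, -100, 42, 43, 44]
```

Whether you need this behavior at all (versus training on the full concatenated text) is one of the factors that can tip the choice between the two trainers.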


I am looking to fine-tune an LLM for generating JSON-to-JSON transformations (matching texts in JSON) using the Hugging Face and TRL libraries.

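For a JSON-to-JSON task like this, the data-preparation side is largely the same whichever trainer is used: each (input, output) pair has to be flattened into a single training string. Below is a minimal sketch of such a formatting helper; the prompt template and function name are assumptions for illustration, not a fixed TRL convention, though the resulting strings are the shape SFTTrainer consumes via a text field or a formatting function:

```python
import json

def format_example(inp: dict, out: dict) -> str:
    """Flatten an (input_json, output_json) pair into one training string."""
    return (
        "### Input JSON:\n" + json.dumps(inp, sort_keys=True) + "\n"
        "### Output JSON:\n" + json.dumps(out, sort_keys=True)
    )

# Example record for a JSON-to-JSON matching dataset.
print(format_example({"name": "Ada"}, {"full_name": "Ada"}))
```

Sorting the keys keeps the serialized JSON deterministic, so identical records always produce identical training strings.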


Comments

Welcome to Stack Overflow; please take a look at stackoverflow.com/help/how-to-ask. Most probably people are going to close this question since it's not coding related. Try asking it on datascience.stackexchange.com; folks there might be more receptive to ML recommendation questions.


Thanks @alvas. I asked it there datascience.stackexchange.com/questions/122164/…


