When should one opt for the Supervised Fine-Tuning Trainer (SFTTrainer) instead of the regular Transformers Trainer for instruction fine-tuning of Large Language Models (LLMs)? From what I gather, the regular Transformers Trainer is typically associated with unsupervised fine-tuning, and is often used for tasks such as input-output schema formatting after supervised fine-tuning has been done. There seem to be many examples of fine-tuning tasks with similar characteristics, some of which use the SFTTrainer and others the regular Trainer. Which factors should be considered when choosing between the two approaches?
I am looking to fine-tune an LLM to generate JSON-to-JSON transformations (matching texts within JSON) using the Hugging Face and trl libraries.
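For context, here is a minimal sketch of how such JSON-to-JSON pairs might be flattened into training text for supervised fine-tuning. The field names (`input`, `output`) and the prompt template are my own assumptions, not part of any library's API; check the trl documentation for how to pass such strings to `SFTTrainer` (e.g. via a formatting function or a precomputed text column), since its signature has changed across versions.

```python
import json

def to_sft_text(pair):
    """Flatten one (input JSON, output JSON) pair into a single training string."""
    return (
        "### Input:\n" + json.dumps(pair["input"], indent=2)
        + "\n### Output:\n" + json.dumps(pair["output"], indent=2)
    )

# Hypothetical example pair for illustration only.
pair = {"input": {"name": "Ada"}, "output": {"full_name": "Ada"}}
print(to_sft_text(pair).splitlines()[0])  # → "### Input:"
```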
Welcome to Stack Overflow; please take a look at stackoverflow.com/help/how-to-ask. This question will most likely be closed since it isn't coding-related. Try asking it on datascience.stackexchange.com, where folks may be more receptive to ML recommendations.