Future Perspectives
1. Better generalization
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks.
Next, we plan to rephrase the classification instructions of the Chinese dataset and perform further instruction tuning on the mixed-task model, in order to improve its performance on unseen content.
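A minimal sketch of the rephrasing step described above: each (text, label) pair is expanded into several instruction-phrased prompts before finetuning. The template strings, field names, and example data here are illustrative assumptions, not taken from the original work.

```python
# Hypothetical instruction templates for rephrasing a classification
# example as natural-language instructions (assumed, for illustration).
INSTRUCTION_TEMPLATES = [
    "Classify the following text into one of these categories: {labels}.\n"
    "Text: {text}\nCategory:",
    "Which of the categories ({labels}) best describes this text?\n"
    "{text}\nAnswer:",
]

def to_instruction_examples(text, label, label_set):
    """Turn one (text, label) pair into several instruction-phrased
    prompt/target examples suitable for instruction tuning."""
    labels = ", ".join(sorted(label_set))
    return [
        {"prompt": template.format(labels=labels, text=text), "target": label}
        for template in INSTRUCTION_TEMPLATES
    ]

# Example: one labeled Chinese sentence becomes two instruction examples.
examples = to_instruction_examples(
    text="股价今日大涨", label="finance", label_set={"finance", "sports"}
)
```

Each resulting example pairs a full natural-language prompt with the gold label as the target string; varying the templates is what exposes the model to multiple phrasings of the same task.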