2 Model Distillation

From word2vec to T5, the number of parameters has grown by several orders of magnitude, and models of this class have always been affordable only to well-resourced players. Model distillation emerged in response: the model is distilled so that its parameter count drops while the loss in model quality is kept as small as possible. The ALBERT model discussed above, which shares parameters across the Transformer blocks, can also be regarded as a form of distillation.
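As a quick illustration of the general idea, here is a minimal sketch of the classic soft-label distillation loss in the style of Hinton et al., not the specific procedure of any model discussed here; the function name `distillation_loss` and the `temperature` / `alpha` parameters are hypothetical choices made for this example.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Combine the teacher's soft targets with the usual hard-label loss."""
    # Soft targets: KL divergence between temperature-softened distributions.
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

During training, the large teacher runs in inference mode to produce `teacher_logits`, and only the small student is updated with this combined loss.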

2.1 FastBERT

References

Lan, Zhenzhong, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. “ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations.”

Liu, Yinhan, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. “RoBERTa: A Robustly Optimized BERT Pretraining Approach.”

Radford, Alec, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. “Improving Language Understanding by Generative Pre-Training.”

Raffel, Colin, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. “Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.”

“Transformer详解.” 2021. https://www.bilibili.com/video/BV1Di4y1c7Zm?p=7.

Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” arXiv.

“XLNet与BERT的异同.” 2019. https://zhuanlan.zhihu.com/p/70257427.

Yang, Zhilin, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. “XLNet: Generalized Autoregressive Pretraining for Language Understanding.” http://arxiv.org/abs/1906.08237.