Abstract: Chinese named entity recognition (NER) is an important problem in the field of natural language processing (NLP). Most existing methods rely on traditional deep learning models that cannot fully exploit the contextual dependencies essential for capturing the relations between words or characters. To address this problem, language representation methods such as BERT have been proposed to learn global context information. Although these methods achieve good results, their large number of parameters limits their efficiency and application in real-world scenarios. To improve both performance and efficiency, this paper proposes an ALBERT-based Chinese NER method which uses ALBERT, a Lite version of BERT, as the pre-trained model to reduce the number of parameters through cross-layer parameter sharing while maintaining performance. In addition, it uses a conditional random field (CRF) to capture sentence-level correlations between words or characters, alleviating tagging inconsistency problems. Experimental results demonstrate that our method outperforms the comparison methods by 4.23-11.17% in terms of relative F1-measure with only 4% of BERT's parameters.
Authors: Haifeng Lv (Kingdee International Software Group Company Limited, China); Yishuang Ning (Kingdee International Software Group Company Limited, China); Ke Ning (Kingdee International Software Group Company Limited, China)