Feature Optimization for Constituent Parsing via Neural Networks @ ACL 2015

Zhiguo Wang, Haitao Mi, and Nianwen Xue. 2015. Feature Optimization for Constituent Parsing via Neural Networks. In ACL. pp. 1138–1147.

For discriminative constituent parsing, the design of feature templates is crucial to performance. Much previous work improves parsing by designing features based on experience and experimentation, but this approach is tedious, and it also cannot determine which features are optimal, even when it achieves good results on the test data.

To address this problem, Wang et al. train a basic feed-forward neural network, using continuous representations for words, while each feature [……]
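
A minimal sketch of that setup (in PyTorch; the class name FeatureScorer, the dimensions, and the number of parser actions are illustrative assumptions, not taken from the paper): each atomic feature is mapped to a continuous embedding, the embeddings are concatenated, and a feed-forward layer scores the possible parsing actions.

```python
# Illustrative feed-forward network over embedded features; all names
# and dimensions are assumptions for the sketch, not from the paper.
import torch
import torch.nn as nn

class FeatureScorer(nn.Module):
    def __init__(self, vocab_size, n_features, emb_dim=50, hidden_dim=200, n_actions=3):
        super().__init__()
        # one continuous (dense) vector per atomic feature value
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # concatenated feature embeddings -> hidden layer -> action scores
        self.hidden = nn.Linear(n_features * emb_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, n_actions)

    def forward(self, feature_ids):
        # feature_ids: (batch, n_features) indices of the extracted features
        x = self.embed(feature_ids).flatten(start_dim=1)  # (batch, n_features * emb_dim)
        h = torch.tanh(self.hidden(x))
        return self.out(h)  # unnormalized scores over parser actions

# toy usage: batch of 2 parser configurations, each described by 8 feature indices
scorer = FeatureScorer(vocab_size=10000, n_features=8)
scores = scorer(torch.randint(0, 10000, (2, 8)))
print(scores.shape)  # torch.Size([2, 3])
```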


Understanding Sequences with RNNs

Deep neural networks have recently conquered a number of major problems in machine perception. Because they learn representations, these methods are especially effective in domains where any given feature in the raw data is uninformative on its own. Consider, for example, an image with dimensions 300×300 pixels. Any one pixel by itself is effectively meaningless. By computing successive layers of representation, however, a neural network can identify hierarchical features, such as edge detectors at a low level and faces at a high level, which enable accurate classification of images into abstract object categories.[……]
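
A minimal sketch of that layered idea (in PyTorch; the architecture and all sizes are illustrative assumptions, not from the post): each layer builds on the output of the previous one, so early layers can act like edge detectors while later layers respond to larger, more abstract structure before the final layer assigns category scores.

```python
# Illustrative stack of layers: raw 300x300 pixels -> low-level features ->
# higher-level features -> class scores. Sizes are arbitrary for the sketch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low level: edge-like filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid level: combinations of edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1),                      # high level: summarize the whole image
    nn.Flatten(),
    nn.Linear(32, 10),                            # scores for 10 object categories
)

scores = model(torch.randn(1, 3, 300, 300))  # one 300x300 RGB image
print(scores.shape)  # torch.Size([1, 10])
```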


An Empirical Exploration of Recurrent Network Architectures @ ICML 2015

Rafal Józefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An Empirical Exploration of Recurrent Network Architectures. In ICML. pp. 2342–2350.

Recurrent Neural Networks are very effective for sequence tasks, but they are often difficult to train because of the exploding and vanishing gradient problems. Gradient clipping can effectively address the exploding problem, while RNNs with the Long Short-Term Memory structure can handle the vanishing problem[……]
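
A minimal sketch of both remedies in a single PyTorch training step (the model, the clipping threshold of 5.0, and the data shapes are illustrative assumptions): an LSTM replaces a plain recurrent cell to mitigate vanishing gradients, and clip_grad_norm_ rescales the gradients when their norm exceeds the threshold to guard against explosion.

```python
# Illustrative training step: LSTM for the vanishing problem,
# gradient clipping for the exploding problem. Sizes are arbitrary.
import torch
import torch.nn as nn

model = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
readout = nn.Linear(64, 10)
params = list(model.parameters()) + list(readout.parameters())
optimizer = torch.optim.SGD(params, lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 100, 32)          # batch of 8 sequences, length 100
y = torch.randint(0, 10, (8,))       # one label per sequence

optimizer.zero_grad()
outputs, (h_n, c_n) = model(x)       # h_n: final hidden state
loss = loss_fn(readout(h_n[-1]), y)
loss.backward()
# rescale gradients whose global norm exceeds 5.0 (exploding-gradient fix)
torch.nn.utils.clip_grad_norm_(params, max_norm=5.0)
optimizer.step()
```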


The Backpropagation Algorithm (Back Propagation)

The backpropagation (BP) algorithm first appeared in the 1970s, but its importance was not widely recognized until 1986, when David Rumelhart, Geoffrey Hinton, and Ronald Williams published the paper "Learning representations by back-propagating errors". This landmark paper described how to train neural networks with backpropagation much faster than earlier learning methods, making it possible to use neural networks on problems that were previously unsolvable. Today, the BP algorithm is the entry point to neural networks and deep learning.[……]
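
A minimal worked sketch of the algorithm on a tiny two-layer network (pure NumPy; the data, layer sizes, and learning rate are made up for illustration): the forward pass computes the prediction and loss, and the backward pass propagates the error gradient layer by layer with the chain rule before a gradient-descent update.

```python
# Backpropagation by hand on a two-layer network with a squared-error loss.
# All numbers and shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))           # 4 examples, 3 input features
y = rng.normal(size=(4, 1))           # regression targets

W1 = rng.normal(size=(3, 5)); b1 = np.zeros(5)
W2 = rng.normal(size=(5, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(100):
    # forward pass
    h = np.tanh(x @ W1 + b1)                     # hidden activations
    y_hat = h @ W2 + b2                          # prediction
    loss = ((y_hat - y) ** 2).mean()

    # backward pass (chain rule, layer by layer)
    d_yhat = 2 * (y_hat - y) / y.size            # dL/dy_hat
    dW2 = h.T @ d_yhat;  db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T                          # propagate error to the hidden layer
    d_pre = d_h * (1 - h ** 2)                   # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ d_pre;   db1 = d_pre.sum(axis=0)

    # gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```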
