PySeqLab

This is a mirror of https://bitbucket.org/A_2/pyseqlab/

The authors are Allam A. and Krauthammer M.

PySeqLab is an open source package for performing supervised learning in structured prediction tasks. It implements conditional random fields (CRFs) models from (1) first-order to higher-order linear-chain CRFs, and from (2) first-order to higher-order semi-Markov CRFs (semi-CRFs).
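To make the distinction between the two model families concrete: a linear-chain CRF assigns one label per token, while a semi-Markov CRF scores whole labeled segments of variable length. The helper below (a standalone illustrative sketch, not part of the PySeqLab API) converts a per-token BIO labeling into the segment representation that a semi-CRF operates on:

```python
# Illustrative sketch only -- not the PySeqLab API. A linear-chain CRF
# labels tokens one at a time; a semi-Markov CRF scores labeled segments.
# This helper groups BIO-tagged tokens into (label, start, end) segments.

def bio_to_segments(tags):
    """Group BIO tags into (label, start, end) segments (end exclusive)."""
    segments = []
    start, label = None, None
    for i, tag in enumerate(tags):
        # A segment boundary occurs at B-, at O, or at an I- whose type
        # does not continue the open segment.
        if tag.startswith("B-") or tag == "O" or (
            tag.startswith("I-") and label != tag[2:]
        ):
            if label is not None:
                segments.append((label, start, i))
            if tag == "O":
                start, label = None, None
            else:
                start, label = i, tag[2:]
    if label is not None:
        segments.append((label, start, len(tags)))
    return segments

# Example in the style of the JNLPBA bio-entity task cited below:
tags = ["B-DNA", "I-DNA", "O", "O", "B-CELL", "I-CELL"]
print(bio_to_segments(tags))  # [('DNA', 0, 2), ('CELL', 4, 6)]
```

A linear-chain model scores the six tag decisions individually; a semi-CRF scores the two segments directly, which lets features look at an entire segment (e.g. its length or full text) at once.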

Documentation & Tutorials

Documentation and tutorials are available on the Read the Docs website.

References

PySeqLab implements models and optimization methods reported in the following references.

  • Bottou, L., & Le Cun, Y. (2004). Large Scale Online Learning. Advances in Neural Information Processing Systems, 16, 217–225.
  • Collins, M. (2002). Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing - EMNLP ’02 (pp. 1–8). http://dx.doi.org/10.3115/1118693.1118694
  • Cuong, V. N., Ye, N., Lee, W. S., & Chieu, H. L. (2014). Conditional Random Field with High-order Dependencies for Sequence Labeling and Segmentation. Journal of Machine Learning Research, 15, 981–1009.
  • Huang, L., Fayong, S., & Guo, Y. (2012). Structured perceptron with inexact search. 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 142–151. Retrieved from http://dl.acm.org/citation.cfm?id=2382029.2382049
  • Johnson, R., & Zhang, T. (2013). Accelerating Stochastic Gradient Descent using Predictive Variance Reduction. Advances in Neural Information Processing Systems 26, 1(3), 315–323. Retrieved from http://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf
  • Kim, J.-D., Ohta, T., Tsuruoka, Y., Tateisi, Y., & Collier, N. (2004). Introduction to the Bio-entity Recognition Task at JNLPBA. Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and Its Applications, 70–75. http://dx.doi.org/10.3115/1567594.1567610
  • Lafferty, J., McCallum, A., & Pereira, F. C. N. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. ICML ’01 Proceedings of the Eighteenth International Conference on Machine Learning, 282–289.
  • Sarawagi, S., & Cohen, W. W. (2005). Semi-Markov Conditional Random Fields for Information Extraction. Advances in Neural Information Processing Systems, 1185–1192.
  • Soong, F. K., & Huang, E.-F. (1990). A tree-trellis based fast search for finding the N Best sentence hypotheses in continuous speech recognition. Proceedings of the Workshop on Speech and Natural Language - HLT ’90, 12–19. http://dx.doi.org/10.3115/116580.116591
  • Sun, X. (2015). Towards Shockingly Easy Structured Classification: A Search-based Probabilistic Online Learning Framework. Retrieved from http://arxiv.org/abs/1503.08381
  • Tsuruoka, Y., Tsujii, J., & Ananiadou, S. (2009). Stochastic gradient descent training for L1-regularized log-linear models with cumulative penalty. Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 1, 477. http://dx.doi.org/10.3115/1687878.1687946
  • Vieira, T., Cotterell, R., & Eisner, J. (2016). Speed-Accuracy Tradeoffs in Tagging with Variable-Order CRFs and Structured Sparsity. In EMNLP.
  • Viterbi, A. (1967). Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory, 13(2), 260–269. http://dx.doi.org/10.1109/TIT.1967.1054010
  • Ye, N., Lee, W. S., Chieu, H. L., & Wu, D. (2009). Conditional Random Fields with High-Order Features for Sequence Labeling. Neural Information Processing Systems, 2, 2. Retrieved from http://www.comp.nus.edu.sg/~leews/publications/nips09_paper.pdf
  • Zeiler, M. D. (2012). ADADELTA: An Adaptive Learning Rate Method. Retrieved from http://arxiv.org/abs/1212.5701
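Several of the references above revolve around exact decoding in first-order chain models. As a pedagogical sketch (under toy assumptions, not PySeqLab's implementation), the Viterbi algorithm (Viterbi, 1967) finds the highest-scoring label sequence under the log-potentials of a linear-chain CRF (Lafferty et al., 2001):

```python
# Pedagogical sketch -- not PySeqLab's implementation. `emit[t][y]` and
# `trans[p][y]` are assumed toy log-scores; a real CRF would compute them
# from weighted feature functions.

def viterbi(emit, trans, labels):
    """Return the highest-scoring label sequence under first-order potentials."""
    T = len(emit)
    # score[y] = best log-score of any path ending in label y at position t
    score = {y: emit[0][y] for y in labels}
    back = []  # back[t-1][y] = best label preceding y at position t
    for t in range(1, T):
        new_score, ptr = {}, {}
        for y in labels:
            best_prev = max(labels, key=lambda p: score[p] + trans[p][y])
            new_score[y] = score[best_prev] + trans[best_prev][y] + emit[t][y]
            ptr[y] = best_prev
        score, back = new_score, back + [ptr]
    # Follow the back-pointers from the best final label.
    y = max(labels, key=lambda q: score[q])
    path = [y]
    for ptr in reversed(back):
        y = ptr[y]
        path.append(y)
    return list(reversed(path))

labels = ["A", "B"]
emit = [{"A": 2.0, "B": 0.0}, {"A": 0.0, "B": 1.0}, {"A": 0.0, "B": 2.0}]
trans = {"A": {"A": 1.0, "B": 0.0}, "B": {"A": 0.0, "B": 1.0}}
print(viterbi(emit, trans, labels))  # ['A', 'B', 'B']
```

Higher-order and semi-Markov variants (Ye et al., 2009; Cuong et al., 2014; Sarawagi & Cohen, 2005) generalize the same dynamic program by conditioning on longer label histories or on segment boundaries, at a higher decoding cost.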

More information about the models and optimization algorithms included in the PySeqLab package is reported in a submitted paper:

Allam A, Krauthammer M. PySeqLab: an open source Python package for sequence labeling and segmentation.
