Author
Dr. N. Sevugapandi, K. Ponraj
Keywords
Indian Penal Code (IPC); First Information Report (FIR); Natural Language Processing (NLP); Bi-LSTM; RoBERTa; Hybrid Deep Learning; Text Classification; Legal Text Analysis; IPC Section Prediction; Transformer Model; Sequential Learning; Contextual Embeddings.
Abstract
Determining the appropriate sections of the Indian Penal Code (IPC) from a First Information Report (FIR) requires legal expertise and time, since FIRs consist of complex, unstructured text. This paper proposes an intelligent system that automates the prediction of IPC sections using Natural Language Processing (NLP) techniques and a hybrid deep learning model combining Bi-LSTM and RoBERTa. The system processes unstructured legal text, extracts meaningful features, and predicts the most relevant IPC sections along with a confidence score. Additionally, the system retrieves automatic IPC explanations from a structured database, improving interpretability. Experimental results demonstrate that the hybrid approach significantly improves prediction accuracy compared to traditional machine learning methods. The proposed system can assist police departments, legal practitioners, and judicial systems in enhancing efficiency, reducing manual effort, and supporting faster legal decision-making.
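The abstract describes a pipeline that outputs a predicted IPC section, a confidence score, and an explanation drawn from a structured database. The following minimal sketch illustrates that output flow only; the model internals are stubbed out, and the `IPC_EXPLANATIONS` table, the example logits, and the function names are hypothetical stand-ins, not taken from the paper.

```python
import math

# Hypothetical structured IPC explanation database (contents are illustrative).
IPC_EXPLANATIONS = {
    "379": "Punishment for theft.",
    "420": "Cheating and dishonestly inducing delivery of property.",
    "302": "Punishment for murder.",
}

def softmax(logits):
    """Convert raw classifier scores into a probability distribution."""
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

def predict_ipc(logits):
    """Return the top-scoring IPC section, its confidence, and its explanation."""
    probs = softmax(logits)
    section = max(probs, key=probs.get)
    return section, probs[section], IPC_EXPLANATIONS.get(section, "No explanation on file.")

# Stand-in logits such as a hybrid Bi-LSTM + RoBERTa classifier might emit for one FIR.
logits = {"379": 2.1, "420": 0.4, "302": -1.3}
section, confidence, explanation = predict_ipc(logits)
print(section, round(confidence, 3), explanation)
```

In the full system described by the paper, the logits would come from a trained hybrid model (RoBERTa contextual embeddings fed into a Bi-LSTM classifier head) rather than being hard-coded.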
References
[1] Y. Goldberg, “A Primer on Neural Network Models for Natural Language Processing,” Journal of Artificial Intelligence Research, vol. 57, pp. 345–420, 2016.
[2] J. Devlin, M. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” in Proceedings of NAACL-HLT, 2019.
[3] Y. Liu et al., “RoBERTa: A Robustly Optimized BERT Pretraining Approach,” arXiv preprint arXiv:1907.11692, 2019.
[4] S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[5] A. Graves, "Supervised Sequence Labelling with Recurrent Neural Networks," Springer, 2012.
[6] T. Mikolov et al., “Efficient Estimation of Word Representations in Vector Space,” arXiv preprint arXiv:1301.3781, 2013.
[7] S. Bird, E. Klein, and E. Loper, “Natural Language Processing with Python,” O’Reilly Media, 2009.
[8] D. Jurafsky and J. H. Martin, “Speech and Language Processing,” 3rd ed., Pearson, 2020.
[9] Indian Penal Code, 1860, Government of India.
[10] A. Vaswani et al., "Attention Is All You Need," in Advances in Neural Information Processing Systems (NeurIPS), 2017.
[11] F. Chollet, “Deep Learning with Python,” Manning Publications, 2018.
[12] T. Brown et al., “Language Models are Few-Shot Learners,” in NeurIPS, 2020.
Received : 03 February 2026
Accepted : 08 April 2026
Published : 12 April 2026
DOI: 10.30726/esij/v13.i2.2026.132006