Application of Generative Artificial Intelligence (GenAI) in Law Enforcement Research

doi: 10.32577/MR.2025.4.10

Abstract

Introduction: Generative Artificial Intelligence (GenAI) represents one of the most significant technological innovations of the 21st century. While its application in law enforcement is still in its early stages, it has the potential to become a long-term strategic tool in law enforcement research.

Objectives: This study aims to facilitate the domestic adoption of GenAI technologies in the law enforcement sector. It explores the key technological components necessary for the effective implementation of GenAI-based research, including training data, context window size, prompt strategies, hallucination risks, and validation processes. Additionally, it addresses the responsible application of GenAI, considerations of scientific reliability, and the transformation of researchers’ roles. The technological potential is illustrated through recent international law enforcement research examples.
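Of the components listed, prompt strategy is the one a researcher controls most directly. The following minimal sketch shows one common way to structure a prompt with an explicit role, context, task, and constraints; the field names and example wording are illustrative assumptions, not taken from the study.

```python
def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt: role, context, task, explicit constraints.

    Stating constraints explicitly (e.g. 'cite only the supplied documents')
    is a common way to reduce hallucination risk in research use.
    """
    lines = [f"Role: {role}", f"Context: {context}", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)
```

In practice such a template keeps prompts reproducible across a study, which matters for the methodological verifiability the abstract emphasises.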

Methodology: The study is based on a targeted review of over 40 scientific and policy documents from the period 2016–2025, employing a multilingual search strategy.

Results: The research identified the core technological components that determine the usability and reliability of generative artificial intelligence. GenAI has already appeared in law enforcement practice, but international examples are typically tied to pilot projects and are still regarded more as technological curiosities than as routine tools. International case studies showcase innovative GenAI applications such as identifying psychological patterns (e.g., manipulative strategies and threatening or coercive forms of communication), uncovering argumentation structures, and rapidly processing digital investigative documents.
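The hallucination risk raised in the abstract is often mitigated with self-consistency checks: the same prompt is sampled several times and answers that diverge strongly from the other samples are flagged for human review. The sketch below illustrates the idea only; the token-level Jaccard measure and the 0.5 threshold are illustrative assumptions, not values from the study.

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two answers."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def consistency_score(samples: list[str]) -> float:
    """Mean pairwise similarity across all sampled answers."""
    pairs = [(i, j) for i in range(len(samples)) for j in range(i + 1, len(samples))]
    if not pairs:
        return 1.0
    return sum(jaccard(samples[i], samples[j]) for i, j in pairs) / len(pairs)

def looks_hallucinated(samples: list[str], threshold: float = 0.5) -> bool:
    """Low agreement between resampled answers is a hallucination warning sign."""
    return consistency_score(samples) < threshold
```

A flagged answer is not proof of hallucination; it is a trigger for the human validation step that the study treats as indispensable.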

Conclusion: The true potential of GenAI unfolds when users possess adequate knowledge of how it works and can apply it consciously within regulated frameworks. Successfully integrating GenAI into scientific work requires a methodological paradigm shift and an expansion of researchers' roles: the researcher must simultaneously act as a technological tool user, a critical validator, and an ethical decision-maker. This approach ensures that results generated by GenAI remain scientifically grounded, ethically acceptable, and methodologically verifiable.
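A validation workflow of the kind the conclusion calls for can be sketched as follows: labels assigned by a GenAI tool are compared against an independent human coding, and chance-corrected agreement is reported with Cohen's kappa, a standard inter-rater statistic. The label set and example codings below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Chance-corrected agreement between two label sequences (Cohen's kappa)."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("label sequences must be non-empty and of equal length")
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected chance agreement from each rater's marginal label frequencies.
    expected = sum(freq_a[lab] / n * freq_b[lab] / n
                   for lab in set(freq_a) | set(freq_b))
    if expected == 1.0:
        return 1.0  # both raters used a single identical label throughout
    return (observed - expected) / (1 - expected)
```

Reporting kappa alongside GenAI-assisted results gives reviewers a verifiable measure of how far the tool's coding can be trusted without the human in the loop.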

Keywords:

generative artificial intelligence, GenAI, law enforcement research, prompt, hallucination

How to Cite

Erdélyi, K. (2026). Application of Generative Artificial Intelligence (GenAI) in Law Enforcement Research. Hungarian Law Enforcement, 25(4), 177–198. https://doi.org/10.32577/MR.2025.4.10
