Large Language Models and Closed Information Systems in the Defence Sector

  • Róbert Karsa
  • Imre Négyesi
doi: 10.32567/hm.2025.2.5

Abstract

Artificial intelligence, especially large language models and machine vision systems, is bringing major changes to our everyday lives. This paper explores the opportunities and challenges of applying AI in the defence domain. We present the operation and training processes of language models, the capabilities and limitations of generative models, and advances in image processing, including visual large language models. We highlight the importance of vector embeddings and vector databases in information retrieval, and the role of retrieval-based text generation in reducing hallucinations. We also examine the costs of training language models and the opportunities available in Hungary. The paper shows that the use of large language models in military science has significant potential, but that a dedicated, secure IT infrastructure is essential to protect confidential data. By demonstrating a closed information system, we show how the defence sector can take advantage of the technology while ensuring secure information management. In conclusion, we emphasise the need for long-term investment and continuous innovation in the defence sector.

Keywords:

artificial intelligence; large language model; hallucination; vector database; military science

How to Cite

Karsa, R., & Négyesi, I. (2025). Large Language Models and Closed Information Systems in the Defence Sector. Military Engineer, 20(2), 75–91. https://doi.org/10.32567/hm.2025.2.5

References

ALBAWI, Saad – MOHAMMED, Tareq Abed – AL-ZAWI, Saad (2017): Understanding of a Convolutional Neural Network. 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 1–6. Online: https://doi.org/10.1109/ICEngTechnol.2017.8308186

BIRHANE, Abeba et al. (2023): Science in the Age of Large Language Models. Nature Reviews Physics, 5, 277–280. Online: https://doi.org/10.1038/s42254-023-00581-4

BODA Mihály (2024): A kockázatkerülő háború és a bátorság a 20–21. század fordulóján. Honvédségi Szemle, 152(3), 113–125. Online: https://doi.org/10.35926/HSZ.2024.3.9

BROWN, Tom B. et al. (2020): Language Models are Few-Shot Learners. Online: https://doi.org/10.48550/arXiv.2005.14165

DEVLIN, Jacob et al. (2018): BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Online: https://arxiv.org/abs/1810.04805

FILIPPOVA, Katja (2020): Controlled Hallucinations: Learning to Generate Faithfully from Noisy Data. In COHN, Trevor – HE, Yulan – LIU, Yang (szerk.): Findings of the Association for Computational Linguistics EMNLP 2020. [H. n.]: Association for Computational Linguistics, 864–870. Online: https://doi.org/10.18653/v1/2020.findings-emnlp.76

HUANG, Lei et al. (2025): A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. ACM Transactions on Information Systems, 43(2), 1–55. Online: https://doi.org/10.1145/3703155

JURAFSKY, Daniel – MARTIN, James H. (2023): N-gram Language Models. In JURAFSKY, Daniel – MARTIN, James H.: Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Third Edition draft. [H. n.]: [k. n.], 31–57. Online: https://web.stanford.edu/~jurafsky/slp3/3.pdf

KUKREJA, Sanjay et al. (2023): Vector Databases and Vector Embeddings – Review. In 2023 International Workshop on Artificial Intelligence and Image Processing (IWAIIP). Online: https://doi.org/10.1109/IWAIIP58158.2023.10462847

LEWIS, Patrick et al. (2020): Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. In LAROCHELLE, H. et al. (szerk.): Advances in Neural Information Processing Systems 33 (NeurIPS 2020). Online: https://arxiv.org/pdf/2005.11401

LUCCIONI, Aleksandra Sasha – VIGUIER, Sylvain – LIGOZAT, Anne-Laure (2023): Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model. Journal of Machine Learning Research, 24, 1–15. Online: https://doi.org/10.48550/arXiv.2211.02001

MIKOLOV, Tomas et al. (2013): Efficient Estimation of Word Representations in Vector Space. Online: https://arxiv.org/pdf/1301.3781

PAN, James Jie et al. (2024): Vector Database Management Techniques and Systems. In SIGMOD ’24: Companion of the 2024 International Conference on Management of Data. New York: Association for Computing Machinery, 597–604. Online: https://doi.org/10.1145/3626246.3654691

RAM, Ori et al. (2023): In-Context Retrieval-Augmented Language Models. Online: https://doi.org/10.1162/tacl_a_00605

STIENNON, Nisan et al. (2020): Learning to Summarize From Human Feedback. Online: https://arxiv.org/pdf/2009.01325

TÓTH Bálint Pál (2016): Beszélő számítógépek mély gondolatokkal. Neurális hálózatok. Élet és Tudomány, 71(30), 944–946.

TOUVRON, Hugo et al. (2023): LLaMA: Open and Efficient Foundation Language Models. Online: https://doi.org/10.48550/arXiv.2302.13971

VASWANI, Ashish et al. (2017): Attention Is All You Need. Advances in Neural Information Processing Systems, 5998–6008. Online: https://arxiv.org/pdf/1706.03762.pdf

WEI, Jason et al. (2022): Emergent Abilities of Large Language Models. Online: https://doi.org/10.48550/arXiv.2206.07682

YANG, Min et al. (2021): DOLG: Single-Stage Image Retrieval With Deep Orthogonal Fusion of Local and Global Features. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 11772–11781. Online: https://doi.org/10.1109/ICCV48922.2021.01156

YANG Zijian Győző et al. (2023): Jönnek a nagyok! BERT-Large, GPT-2 és GPT-3 nyelvmodellek magyar nyelvre. In XIX. Magyar Számítógépes Nyelvészeti Konferencia, 247–262. Online: https://acta.bibl.u-szeged.hu/78417/1/msznykonf_019_247-262..pdf

ZHANG, Shengyu et al. (2023): Instruction Tuning for Large Language Models: A Survey. Online: https://doi.org/10.48550/arXiv.2308.10792

ZHOU, Chunting et al. (2023): LIMA: Less Is More for Alignment. Online: https://doi.org/10.48550/arXiv.2305.11206