Algorithms of Machines and Law:

Risks in Pattern Recognition, Machine Learning and Artificial Intelligence for Justice and Fairness

doi: 10.53116/pgaflr.2021.2.3


Pattern recognition, machine learning and artificial intelligence offer tremendous opportunities for efficient operations, management and governance. They can optimise processes for object, text, graphics, speech and pattern recognition. In doing so, however, the algorithmic processing may be subject to unknown biases that do harm rather than good. We examine how this may happen, what damage may result, and the ensuing ethical and legal impact, including newly manifest obligations to avoid harm to others from these systems. But what are the risks, given the human condition?


Keywords: pattern recognition, artificial intelligence, governance, management, justice, ethics, human condition

How to Cite

Losavio, M. (2021). Algorithms of Machines and Law: Risks in Pattern Recognition, Machine Learning and Artificial Intelligence for Justice and Fairness. Public Governance, Administration and Finances Law Review, 6(2), 21–34.



