Algorithms of Machines and Law:

Risks in Pattern Recognition, Machine Learning and Artificial Intelligence for Justice and Fairness

doi: 10.53116/pgaflr.2021.2.3

Abstract

Pattern recognition, machine learning and artificial intelligence offer tremendous opportunities for efficient operations, management and governance. They can optimise processes for object, text, graphics, speech and pattern recognition. In doing so, the algorithmic processing may be subject to unknown biases that do harm rather than good. We examine how this may happen, what damage may result, and the ethical and legal obligations that newly manifest to avoid harm to others from these systems. But what are the risks, given the Human Condition?
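To make the abstract's bias concern concrete, the following is a minimal sketch (not drawn from the article itself) of one simple way such a harm can be surfaced: computing a demographic parity gap, the difference in favourable-decision rates across groups, over a model's outputs. All data, group labels and function names here are hypothetical illustrations.

    # Minimal sketch (hypothetical, not the article's method): measuring a
    # demographic parity gap in a model's decisions.
    from collections import defaultdict

    def demographic_parity_gap(decisions):
        """decisions: iterable of (group_label, approved: bool) pairs.
        Returns (largest gap in approval rates, per-group rates)."""
        totals = defaultdict(int)
        approvals = defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            approvals[group] += int(approved)
        rates = {g: approvals[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical screening outcomes: the model never saw the group label,
    # but a proxy feature skewed approval rates anyway.
    outcomes = [("A", True)] * 80 + [("A", False)] * 20 \
             + [("B", True)] * 55 + [("B", False)] * 45
    gap, rates = demographic_parity_gap(outcomes)
    print(rates)               # {'A': 0.8, 'B': 0.55}
    print(f"gap = {gap:.2f}")  # 0.25 -- a disparity worth investigating

Such a gap is only one of several possible fairness measures, and a nonzero value is a signal for investigation rather than proof of wrongdoing; the point is that the bias can exist even when no protected attribute is an explicit input.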

Keywords:

pattern recognition, artificial intelligence, governance, management, justice, ethics, human condition

How to Cite

Losavio, M. (2021). Algorithms of Machines and Law: Risks in Pattern Recognition, Machine Learning and Artificial Intelligence for Justice and Fairness. Public Governance, Administration and Finances Law Review, 6(2), 21–34. https://doi.org/10.53116/pgaflr.2021.2.3

References

Aougab, T. et al. (2020). Letter to American Mathematical Society Notices: Boycott collaboration with police. Online: https://bit.ly/312vLls

Calo, R. (2018). Artificial Intelligence Policy: A Primer and Roadmap. University of Bologna Law Review, 3 (2), 180–218. Online: https://doi.org/10.2139/ssrn.3015350

Chalmers, D. (2010). The Singularity: A Philosophical Analysis. Journal of Consciousness Studies, 17 (9–10), 7–65.

Davidson, R. (2019, August 8). Automated Threat Detection and the Future of Policing. FBI Law Enforcement Bulletin.

Deeks, A. S. (2018). Predicting Enemies. Virginia Law Review, 104 (8).

Axon AI and Policing Technology Ethics Board (2019, June). First Report of the Axon Artificial Intelligence and Policing Technology Ethics Board.

Franke, K. & Srihari, S. N. (2007, August 29–31). Computational Forensics: Towards Hybrid-Intelligent Crime Investigation. Proceedings of the Third International Symposium on Information Assurance and Security. Online: https://doi.org/10.1109/IAS.2007.84

Gotterbarn, D., Miller, K. & Rogerson, S. (1997). Software engineering code of ethics. Communications of the ACM, 40 (11), 110–118. Online: https://doi.org/10.1145/265684.265699

Gouvernement de France (2020, June 17). Launch of the Global Partnership on Artificial Intelligence. Online: https://bit.ly/3r6jxTz

Kenneally, E. et al. (2012, August 3). The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research. Online: https://doi.org/10.2139/ssrn.2445102

Makker, S. R. (2017). Overcoming “Foggy” Notions of Privacy: How Data Minimization Will Enable Privacy in the Internet of Things. UMKC Law Review, 85 (4), 895–915.

Office of the Inspector General (2019, June 12). Review of Selected Los Angeles Police Department Data-Driven Policing Strategies. Online: https://bit.ly/3cIfEvJ

O’Neil, C. (2016). Weapons of Math Destruction. Crown Publishing.

Organization for Economic Cooperation and Development (2020, June 15). OECD to host Secretariat of new Global Partnership on Artificial Intelligence. Online: https://bit.ly/3l89BFl

Uberti, D. (2020, June 1). Algorithms Used in Policing Face Policy Review. Artificial Intelligence Daily, Wall Street Journal.

Yampolskiy, R. V. (2012a). Leakproofing the Singularity: Artificial Intelligence Confinement Problem. Journal of Consciousness Studies, 19 (1–2), 194–214.

Yampolskiy, R. V. (2012b). Artificial Intelligence Safety Engineering: Why Machine Ethics Is a Wrong Approach. In V. C. Müller (Ed.), Philosophy and Theory of Artificial Intelligence (pp. 389–396). Springer. Online: https://doi.org/10.1007/978-3-642-31674-6_29

Yampolskiy, R. V. & Fox, J. (2013). Safety Engineering for Artificial General Intelligence. Topoi, 32 (2), 217–226. Online: https://doi.org/10.1007/s11245-012-9128-9

Yemini, M. (2018). The New Irony of Free Speech. Columbia Science and Technology Law Review, 20 (1). Online: https://doi.org/10.7916/stlr.v20i1.4769
