Algorithms of Machines and Law:
Risks in Pattern Recognition, Machine Learning and Artificial Intelligence for Justice and Fairness
Copyright (c) 2021 Michael Losavio
This work is licensed under a Creative Commons Attribution 4.0 International License.
Pattern recognition, machine learning and artificial intelligence offer tremendous opportunities for efficient operations, management and governance. They can optimise processes for object, text, graphics, speech and pattern recognition. Yet in doing so, the algorithmic processing may be subject to unknown biases that do harm rather than good. We examine how this may happen, what damage may occur, and the resulting ethical and legal impact, including newly manifest obligations to avoid harm to others from these systems. But what are the risks, given the Human Condition?
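The bias concern raised above can be made concrete with a simple fairness check. The following is a minimal sketch (not from the article; all names and data are hypothetical) computing the demographic parity difference, i.e. the gap in positive-decision rates between two groups, for some classifier's output:

```python
# Minimal sketch of one common fairness metric on hypothetical classifier
# output. All data and labels below are illustrative, not from the article.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-decision rates between groups "A" and "B".

    predictions: list of 0/1 classifier decisions
    groups:      list of group labels ("A" or "B"), aligned with predictions
    """
    def positive_rate(label):
        decisions = [p for p, g in zip(predictions, groups) if g == label]
        return sum(decisions) / len(decisions)

    return abs(positive_rate("A") - positive_rate("B"))

# Hypothetical decisions for members of two demographic groups.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A receives positive decisions at 0.75, group B at 0.25.
print(f"Demographic parity difference: {demographic_parity_difference(preds, grps):.2f}")
```

A large gap on such a metric does not by itself prove unlawful discrimination, but it is the kind of measurable disparity that triggers the ethical and legal obligations the article discusses.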
References
Aougab, T. et al. (2020). Letter to American Mathematics Society Notices: Boycott collaboration to police. Online: https://bit.ly/312vLls
Calo, R. (2018). Artificial Intelligence Policy: A Primer and Roadmap. University of Bologna Law Review, 3 (2), 180–218. Online: https://doi.org/10.2139/ssrn.3015350
Chalmers, D. (2010). The Singularity: A Philosophical Analysis. Journal of Consciousness Studies, 17 (9–10), 7–65.
Davidson, R. (2019, August 8). Automated Threat Detection and the Future of Policing. FBI Law Enforcement Bulletin.
Deeks, A. S. (2018). Predicting Enemies. Virginia Law Review, 104 (8).
First Report of the Axon Artificial Intelligence and Policing Technology Ethics Board, June 2019.
Franke, K. & Srihari, S. N. (2007, August 29–31). Computational Forensics: Towards Hybrid-Intelligent Crime Investigation. Proceedings of the Third International Symposium on Information Assurance and Security. Online: https://doi.org/10.1109/IAS.2007.84
Gotterbarn, D., Miller, K. & Rogerson, S. (1997). Software engineering code of ethics. Communications of the ACM, 40 (11), 110–118. Online: https://doi.org/10.1145/265684.265699
Gouvernement de France (2020, June 17). Launch of the Global Partnership on Artificial Intelligence. Online: https://bit.ly/3r6jxTz
Kenneally, E. et al. (2012, August 3). The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research. Online: https://doi.org/10.2139/ssrn.2445102
Makker, S. R. (2017). Overcoming “Foggy” Notions of Privacy: How Data Minimization Will Enable Privacy in the Internet of Things. UMKC Law Review, 85 (4), 895–915.
Office of the Inspector General (2019, June 12). Review of Selected Los Angeles Police Department Data-Driven Policing Strategies. Online: https://bit.ly/3cIfEvJ
O’Neil, C. (2016). Weapons of Math Destruction. Crown Publishing.
Organization for Economic Cooperation and Development (2020, June 15). OECD to host Secretariat of new Global Partnership on Artificial Intelligence. Online: https://bit.ly/3l89BFl
Uberti, D. (2020, June 1). Algorithms Used in Policing Face Policy Review. Artificial Intelligence Daily, Wall Street Journal.
Yampolskiy, R. V. (2012a). Leakproofing the Singularity: Artificial Intelligence Confinement Problem. Journal of Consciousness Studies, 19 (1–2), 194–214.
Yampolskiy, R. V. (2012b). Artificial Intelligence Safety Engineering: Why Machine Ethics Is a Wrong Approach. In V. C. Müller (Ed.), Philosophy and Theory of Artificial Intelligence (pp. 389–396). Springer. Online: https://doi.org/10.1007/978-3-642-31674-6_29
Yampolskiy, R. V. & Fox, J. (2013). Safety Engineering for Artificial General Intelligence. Topoi, 32 (2), 217–226. Online: https://doi.org/10.1007/s11245-012-9128-9
Yemini, M. (2018). The New Irony of Free Speech. Columbia Science and Technology Law Review, 20 (1). Online: https://doi.org/10.7916/stlr.v20i1.4769