By Nicholas Kluge

Artificial Intelligence Ethics and Safety: practical tools for creating “good” models

It is not an uncommon situation for an individual, or a group of individuals, to find themselves in front of a decision-maker responsible for making some form of judgment based on a set of observable facts and characteristics (e.g., a judge in a civil court, an appraiser in a job interview, or a bank manager responsible for authorizing, or denying, a loan). What is new, however, is the use of statistical inference models to automate such processes (e.g., models created by machine learning).

As autonomous systems affect more and more people and society, our understanding of the risks such systems pose (and of how to mitigate them) must deepen. But what kind of risks are we talking about? Think, for example, of a model that systematically denies loans to applicants from a particular demographic group.

And why do such systems misbehave?

A simplistic answer would be, “The answer is in the data. The data we use is skewed.” However, a more truthful answer would be, “It's a complex problem.”
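To make the "skewed data" intuition concrete, here is a minimal sketch (not taken from the guide) of one standard way to probe a model's outputs for group disparities: the demographic parity difference, i.e., the gap in positive-outcome rates between two groups. The data below is hypothetical toy data for illustration only.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-outcome rates between two groups.

    predictions: iterable of 0/1 model outputs (1 = favorable outcome)
    groups: iterable of group labels, one per prediction (exactly two groups)
    """
    rates = {}
    for g in set(groups):
        outcomes = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(outcomes) / len(outcomes)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)


# Hypothetical loan-approval predictions (1 = approved), by applicant group:
# group "A" is approved 80% of the time, group "B" only 20%.
preds = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(demographic_parity_difference(preds, groups))  # 0.6
```

A large gap like this does not by itself prove the model is unfair (the guide discusses why the problem is more complex than any single metric), but it is the kind of measurable signal that turns an abstract worry about "skewed data" into something developers can audit.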

There is much that we still do not understand about such systems. At the 2017 NIPS conference (Conference on Neural Information Processing Systems), Ali Rahimi raised an important point about the current state of machine learning research: "machine learning has become alchemy."

In Ali Rahimi's words:

"Alchemy is not bad. There is a place for alchemy. Alchemy “worked.” Alchemists invented metallurgy, ways to dye textiles, our modern glass making processes and medications. Then again alchemists also believed that could cure diseases with leeches and transmute base metals into gold. For the physics and chemistry of the 1700s to usher in the sea change in our understanding of the universe that we now experience, scientists had to dismantle 2,000 years’ worth of alchemical theories. If you're building photo-sharing systems alchemy is okay. But we're beyond that now. We're building systems that govern healthcare and mediate our civic dialogue. We influence elections. I would like to live in a society whose systems are built on top of verifiable, rigorous, thorough knowledge and not on alchemy."

In other words, machine learning still needs more theoretical study. However, it is not clear that the industry will slow down its practical progress and development for the sake of caution and formalization of the theories that underlie the creation of its products. And that creates problems.

Thus, we believe that it is necessary to create and formalize a new agent to operate within organizations and companies focused on developing technologies and solutions that use these types of systems.

Since there are still few proposals on how to implement ethical principles and normative guidelines in the practice of AI system development, we at AIRES PUCRS have developed a small guide that seeks to bridge this gap between discourse and praxis, between abstract principles and technical implementation. In this work, we introduce the reader to the subject of AI Ethics and Safety and, at the same time, present a series of tools to help developers of intelligent systems create "good" models. This is a developing guide, published in English and Portuguese. Contributions and suggestions are welcome!

Our guide, "Artificial Intelligence Ethics and Safety: practical tools for creating 'good' models," can be accessed at these links: arXiv and ResearchGate.

For more information, contact Nicholas Kluge (author of the paper).
