By Nicholas Kluge

Welcome to the Teeny-tiny castle

AIRES at PUCRS is making available to the public the teeny-tiny_castle, a GitHub repository aimed at providing researchers with tools for working on issues involving AI ethics and safety using Python.

In this repository, you will find notebooks and scripts showing how to create and use tools for addressing certain safety issues in AI (interpretability, sustainability, fairness, robustness). You will also find an introductory course on ML (Machine Learning) to help you get started if you are unfamiliar with the field. The Introduction to ML part is designed to give interested readers a grounding in some of the tools and abstractions that underlie ML.

All requirements and step-by-step instructions are laid out for you in the teeny-tiny_castle. In it, you will find a series of tutorials, examples, and tools for working on issues related to:

AI Ethics (e.g., Guidelines, Governance, Regulation, R&D).

Sustainability in the development of large models (how to measure and quantify the carbon footprint of models trained with ML).
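As a rough illustration of the idea (not a tool from the repository itself), a model's training emissions can be estimated as energy consumed times the carbon intensity of the local grid; the numbers below are placeholder assumptions, and dedicated trackers look up region-specific intensities instead:

```python
def estimate_co2_kg(power_watts, hours, intensity_kg_per_kwh=0.475):
    """Estimate training emissions as energy (kWh) times grid carbon intensity.

    0.475 kg CO2/kWh is a rough global-average intensity used here as an
    assumption; real measurement tools use region-specific values and
    measured (not nominal) hardware power draw.
    """
    energy_kwh = (power_watts / 1000.0) * hours
    return energy_kwh * intensity_kg_per_kwh

# A hypothetical 300 W GPU training for 24 hours:
print(round(estimate_co2_kg(300, 24), 2))  # ~3.42 kg of CO2
```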

Interpretability and Robustness in Computer Vision (e.g., tools for XAI and Adversarial ML).
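To give a flavor of adversarial ML (this is a toy sketch, not code from the repository), the Fast Gradient Sign Method perturbs an input in the direction that most increases the model's loss. For a logistic model it can be written in a few lines:

```python
import math

def fgsm_perturb(x, w, b, y, epsilon):
    """Fast Gradient Sign Method on a logistic model (pure-Python toy).

    For p = sigmoid(w.x + b) with cross-entropy loss, the gradient of the
    loss with respect to the input x is (p - y) * w; FGSM steps epsilon in
    the sign of that gradient to craft an adversarial example.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

# Perturbing an input labeled y=1 pushes the model further from that label:
print(fgsm_perturb([1.0, 2.0], [1.0, -1.0], 0.0, 1.0, 0.1))  # → [0.9, 2.1]
```

In deep learning the same idea applies, with the input gradient obtained by backpropagation instead of a closed form.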

Interpretability and Robustness in NLP (e.g., LIME for NLP, playgrounds for prompt engineering, text mining examples).
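To illustrate the intuition behind word-level explanations (a simplified occlusion baseline, not the actual LIME algorithm, which fits a weighted linear surrogate over many randomly masked samples), one can drop each word and measure how the classifier's score changes; the sentiment scorer below is a made-up toy:

```python
def word_importance(text, predict):
    """Leave-one-word-out importance: drop each word, measure the score change."""
    words = text.split()
    base = predict(text)
    scores = {}
    for i, word in enumerate(words):
        masked = " ".join(words[:i] + words[i + 1:])
        scores[word] = base - predict(masked)
    return scores

# A hypothetical keyword-counting "sentiment model" for demonstration:
def toy_sentiment(text):
    positive = {"great", "good"}
    return sum(w in positive for w in text.split())

print(word_importance("a great movie", toy_sentiment))
# → {'a': 0, 'great': 1, 'movie': 0}
```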

Interpretability in classification and prediction with tabular data (how to explain divergent classifications made by ML models).
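For tabular data, the simplest case makes the idea concrete (a sketch with invented feature names, not repository code): a linear model's prediction decomposes exactly into per-feature terms, which is the intuition behind additive explanation methods such as SHAP:

```python
def linear_contributions(x, weights, feature_names):
    """Per-feature contributions of a linear model: weight_i * value_i.

    The prediction (minus the intercept) is exactly the sum of these terms,
    so each term answers "how much did this feature push the score?"
    """
    return {name: w * v for name, w, v in zip(feature_names, weights, x)}

# Hypothetical credit-scoring features:
print(linear_contributions([2.0, 1.0], [0.5, -1.0], ["income", "debt"]))
# → {'income': 1.0, 'debt': -1.0}
```

Comparing these dictionaries for two applicants shows exactly which features drove a divergent classification.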

Machine Learning Fairness (how to measure, quantify, and remedy algorithmic discrimination).
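One common fairness measurement, sketched here for illustration (not taken from the repository), is the demographic parity difference: the gap in positive-prediction rates between two groups defined by a protected attribute:

```python
def demographic_parity_diff(preds_group_a, preds_group_b):
    """Difference in positive-prediction rates between two groups (0/1 preds).

    A value near 0 indicates parity under the demographic-parity criterion;
    large values flag potential algorithmic discrimination worth auditing.
    """
    rate = lambda preds: sum(preds) / len(preds)
    return rate(preds_group_a) - rate(preds_group_b)

print(demographic_parity_diff([1, 1, 0, 1], [1, 0, 0, 0]))  # 0.75 - 0.25 = 0.5
```

Other criteria (equalized odds, calibration) condition on the true label as well, and they generally cannot all be satisfied at once.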

Systemic Security (e.g., how organizations should coordinate to protect and preserve their information infrastructure).

This repository is a work in progress being built as part of the RAIES initiative (Network for Ethical and Safe Artificial Intelligence), a project supported by FAPERGS (Research Support Foundation of the State of Rio Grande do Sul). If you are interested in collaborating with the project, contact us!


