Google releases a machine learning training module on fairness

Posted by Andrew Zaldivar, Developer Advocate, Google AI

A few months ago, we announced our AI Principles, a set of commitments we are upholding to guide our work in artificial intelligence (AI) going forward. Along with our AI Principles, we shared a set of recommended practices to help the larger community design and build responsible AI systems.

In particular, one of our AI Principles speaks to the importance of acknowledging that AI algorithms and datasets are the product of their environment, and as such, we need to be conscious of any potential unfair outcomes generated by an AI system and the risk those outcomes pose across cultures and societies. A recommended practice here is for practitioners to understand the limitations of their algorithms and datasets, but this is a problem that is far from solved.

To help practitioners take on the challenge of building fairer and more inclusive AI systems, we developed a short, self-study training module on fairness in machine learning. This new module is part of our Machine Learning Crash Course, which we highly recommend taking first, unless you are already quite familiar with machine learning, in which case you can jump straight into the Fairness module.

The Fairness module features a hands-on technical exercise. This exercise demonstrates how you can use tools and techniques that may already exist in your development stack (such as Facets Dive, Seaborn, pandas, scikit-learn and TensorFlow Estimators to name a few) to explore and discover ways to make your machine learning system fairer and more inclusive. We created our exercise in a Colaboratory notebook, which you are more than welcome to use, modify and distribute for your own purposes.
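To give a flavor of the kind of exploration the exercise encourages, here is a minimal sketch using pandas and Seaborn to compare subgroup and label distributions in a training set. The column names ("gender", "income_bracket") and the file path are illustrative assumptions, not code taken from the notebook itself.

```python
# Illustrative sketch: explore how examples and labels are distributed
# across a sensitive attribute before training anything.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("train.csv")  # hypothetical path to your training data

# How many examples does each subgroup contribute?
print(df["gender"].value_counts(normalize=True))

# How does the label distribution differ across subgroups?
print(df.groupby("gender")["income_bracket"].value_counts(normalize=True))

# Visualize the same comparison; a large skew here is an early hint that
# the model may end up with different error rates for different groups.
sns.countplot(data=df, x="gender", hue="income_bracket")
plt.show()
```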

From exploring datasets to analyzing model performance, it's really easy to forget to make time for responsible reflection when building an AI system. So rather than having you run every code cell in sequential order without pause, we added what we call FairAware tasks throughout the exercise. FairAware tasks help you zoom in and out of the problem space. That way, you can remind yourself of the big picture: finding the undesirable biases that could disproportionately affect model performance across groups. We hope a process like FairAware will become part of your workflow, helping you find opportunities for inclusion.

FairAware task guiding practitioner to compare performances across gender.
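As a rough illustration of what such a comparison can look like in code, the sketch below slices held-out predictions by a sensitive attribute and reports error rates per group with scikit-learn. The `model`, `X_test`, `y_test` objects and the "gender" column are assumed to exist in your own pipeline; this is not the notebook's exact code.

```python
# FairAware-style check (illustrative): compare false positive and false
# negative rates across groups instead of looking only at overall accuracy.
import pandas as pd
from sklearn.metrics import confusion_matrix

preds = model.predict(X_test)
results = pd.DataFrame({
    "gender": X_test["gender"],
    "label": y_test,
    "pred": preds,
})

for group, sub in results.groupby("gender"):
    tn, fp, fn, tp = confusion_matrix(sub["label"], sub["pred"], labels=[0, 1]).ravel()
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    fnr = fn / (fn + tp) if (fn + tp) else float("nan")
    print(f"{group}: false positive rate={fpr:.3f}, false negative rate={fnr:.3f}")
```

A gap between groups on either rate is exactly the kind of undesirable bias the FairAware tasks ask you to pause and investigate.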

The Fairness module was created to provide you with enough of an understanding to get started in addressing fairness and inclusion in AI. Keep an eye on this space for future work as this is only the beginning.

If you wish to learn more from our other examples, check out the Fairness section of our Responsible AI Practices guide. There, you will find a full set of Google recommendations and resources. From our latest research proposal on reporting model performance with fairness and inclusion considerations, to our recently launched diagnostic tool that lets anyone investigate trained models for fairness, our resource guide highlights many areas of research and development in fairness.

Let us know what your thoughts are on our Fairness module. If you have any specific comments on the notebook exercise itself, then feel free to leave a comment on our GitHub repo.


On behalf of many contributors and supporters,

Andrew Zaldivar, Developer Advocate, Google AI