Organisational Artificial Intelligence (AI) systems – how to build ethics, accountability, and transparency

Bill Simpson-Young is Chief Executive of Gradient Institute, which he co-founded in 2019 with other AI researchers and practitioners. Gradient Institute works to tackle the problem of poorly designed AI systems which, when used by organisations, can amplify disadvantage for their customers.


Bill has a background as a software engineer and has spent his career in the research and development of new software technologies, including machine learning (the main form of current AI technology). Through his work at Gradient Institute, he leads a team building ethics, accountability and transparency into AI systems: developing new algorithms, training organisations that operate AI systems, and providing technical guidance for AI policy development.

Bill, thanks for joining us on the Blog Spot. Can you start by sharing your insights into predictions made by AI systems?

AI systems are often used to make predictions: for example, whether a customer will be able to pay a bill or repay a loan, or whether a transaction is fraudulent. The systems then often make decisions based on these predictions, and these decisions are sometimes life-changing.

Typically, an AI system is trained on data so that, on average, its predictions are accurate. However, there will always be some predictions that are less accurate than others. This is particularly problematic when the less accurate predictions are those relating to people already experiencing disadvantage, because the resulting decisions may be unfairly detrimental to them.
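
To make this concrete, here is a minimal sketch of the kind of check that can reveal the problem: evaluating a model's accuracy separately for each cohort rather than only overall. It assumes a trained binary classifier and NumPy arrays of labels, predictions and cohort identifiers; the function and variable names are illustrative, not any particular library's API.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def accuracy_by_cohort(y_true, y_pred, cohort):
    """Report accuracy for each cohort, not just overall.

    y_true, y_pred: NumPy arrays of true and predicted labels.
    cohort: NumPy array identifying each person's group
    (e.g. a demographic attribute). Names here are illustrative.
    """
    results = {"overall": accuracy_score(y_true, y_pred)}
    for group in np.unique(cohort):
        mask = cohort == group
        results[str(group)] = accuracy_score(y_true[mask], y_pred[mask])
    return results

# A model that is accurate "on average" can still be markedly less
# accurate for one cohort; the disparity only becomes visible when
# performance is disaggregated like this.
```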

Also, the data that people use to train AI systems is often not representative of the actual population they are trying to learn about. People who are disadvantaged are often overrepresented or underrepresented in data. For example, people experiencing disadvantage are likely overrepresented in crime data, while they may be underrepresented in higher education data. When you use such data to make a prediction or an intervention, if you have less data representing a particular cohort, your predictions are likely to be less accurate for those individuals. Read more in the Australian Human Rights Commission's Human Rights and Technology Final Report.
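
One simple, hedged way to surface such a representation gap before training is to compare each cohort's share of the training data with an external estimate of its share of the population (for example, from census figures). The sketch below assumes a pandas DataFrame with a cohort column; the function and column names are hypothetical.

```python
import pandas as pd

def representation_gap(train_df, population_shares, cohort_col="cohort"):
    """Compare each cohort's share of the training data with its share
    of the population the model will actually be applied to.

    population_shares: dict mapping cohort -> expected proportion,
    e.g. derived from census data (an assumed, illustrative input).
    """
    train_shares = train_df[cohort_col].value_counts(normalize=True)
    rows = []
    for group, expected in population_shares.items():
        observed = float(train_shares.get(group, 0.0))
        rows.append({"cohort": group,
                     "share_in_training_data": observed,
                     "share_in_population": expected,
                     "gap": observed - expected})
    return pd.DataFrame(rows)

# A large negative gap flags a cohort the model will see too little of,
# and whose predictions are therefore likely to be less accurate.
```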

What about algorithmic biases, and how are organisations perpetuating these?

Organisations need to understand that they can perpetuate algorithmic biases, particularly when it comes to predicting vulnerability. Factors like missing people who don't self-identify as vulnerable, or overlooking hidden vulnerabilities, mean that stakeholder involvement is critical alongside the use of AI, so that systems are designed, tested and carefully monitored to ensure they aren't causing more harm. To understand more about what can lead to algorithmic bias and the types of actions that can be taken to reduce or avoid it, see the Australian Human Rights Commission's report on algorithmic bias (which Gradient Institute co-wrote).
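
As one illustrative example of the kind of testing and monitoring Bill describes (a sketch of a common bias check, not a method prescribed by the report), teams often compare error rates such as false-positive rates across groups, both before deployment and on an ongoing basis.

```python
import numpy as np

def false_positive_rate_by_group(y_true, y_pred, group):
    """Compare false-positive rates across groups.

    A false positive might mean, say, wrongly flagging a customer as
    vulnerable or as a credit risk. Large gaps between groups are a
    signal to investigate before and during production use.
    Inputs are NumPy arrays; names are illustrative.
    """
    rates = {}
    for g in np.unique(group):
        negatives = (group == g) & (y_true == 0)  # actual negatives in this group
        if negatives.sum() == 0:
            rates[str(g)] = float("nan")          # nothing to assess
        else:
            rates[str(g)] = float(np.mean(y_pred[negatives] == 1))
    return rates
```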

How can businesses get the best use of AI technologies?

In order to get the best use of these new technologies, organisations that use them need to earn the trust of customers and the general public. Earning trust is helped by greater transparency about a system's purpose, operation and performance, by ensuring that an AI decision can be contested, and by ensuring that the system's outputs meet the needs of the customer, not just the organisation's needs. This customer-centric approach, alongside the human touch of staff and stakeholder engagement, can help meet clients' needs and avoid the gaps in current technologies.

Is it possible to use technology in a way that aligns profit and social good?

Absolutely! For example, "next-best-action" systems are often used to optimise both business metrics and individual customer outcomes. These systems operate at scale to make predictions and, based on those predictions, to suggest actions. Rather than using a model designed purely to maximise profitability, you can configure the next-best-action system to still achieve close to maximum profitability while favouring people experiencing disadvantage in a really subtle way. A little tweak in a system can make a significant difference to your customer base without significantly affecting profitability. Read more about this ethical approach in the Ambiata Blog post on next-best-action systems.
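
To illustrate what such a subtle tweak could look like in code (a sketch under assumed inputs, not Gradient Institute's or Ambiata's actual implementation), a next-best-action scoring rule can add a small weight on predicted customer benefit for people experiencing disadvantage. Here profit_model, benefit_model, the customer fields and the weight are all hypothetical stand-ins.

```python
def next_best_action(customer, actions, profit_model, benefit_model,
                     disadvantage_weight=0.1):
    """Choose an action mainly by expected profit, gently tilted
    toward customer benefit for disadvantaged customers.

    profit_model(customer, action)  -> expected profit of the action
    benefit_model(customer, action) -> predicted benefit to the customer
    Both models, the customer dict and the weight are illustrative
    assumptions, not any vendor's actual API.
    """
    def score(action):
        s = profit_model(customer, action)
        if customer.get("experiencing_disadvantage", False):
            # The "little tweak": a small bonus for actions predicted
            # to help customers experiencing disadvantage.
            s += disadvantage_weight * benefit_model(customer, action)
        return s

    return max(actions, key=score)
```

Because the weight is small, the chosen actions stay close to profit-optimal overall, while decisions for disadvantaged customers shift toward their benefit.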

Bill has worked in technology research and development in global tech organisations (Canon and Unisys), government agencies (CSIRO), non-profits and start-ups, and has also lectured in IT innovation at the University of Sydney.

 
