
How can we build AI systems that are better than the human biases that contaminate them? Here's what the experts say


Photo Credit: Gerd Altmann / Pixabay


Artificial Intelligence (AI) systems now control everything from moderating social media to decision making in policy and governance. Today, companies are building AI systems that can predict where COVID-19 will strike next and make decisions about healthcare. But in creating these systems, and in choosing the data that informs their decisions, there is a great risk that human bias will creep in and amplify the mistakes people make.

To understand how one can build confidence in AI systems, we caught up with IBM Research India Director Gargi Dasgupta and Distinguished Engineer Samip Mehta, as well as Dr. Vivienne Ming, AI expert and founder of Sokos Labs, a California-based AI incubator, to find some answers.

How does bias creep into AI systems in the first place?

Dr. Ming explained that bias becomes a problem when AI is trained on biased data. Dr. Ming is a neuroscientist and the founder of Sokos Labs, an incubator that works to find solutions to messy human problems through the application of AI. "As an academic, I've had the opportunity to do a lot of collaborative work with Google and Amazon and others," she explained.

"If you want to build systems that can solve problems, then it's important that you first see where the problem exists. A large amount of data and a bad understanding of the problem is really about creating issues. Guaranteed. "

Dasgupta of IBM said, "Special tools and technology are needed to ensure that we do not have biases. We need to take extra precautions to remove bias, so that our own biases are not naturally transmitted into the models."

Since machine learning is built on past data, it is very easy for algorithms to find a correlation and read it as cause and effect. Noise and random fluctuations can be interpreted by the model as meaningful patterns. But when new data comes in without those same fluctuations, the model's predictions fall apart.
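A minimal sketch of this failure mode, using a hypothetical toy setup rather than any example from the interview: a model trained on features that are pure noise can look highly accurate on its training data, yet collapse to chance level on fresh data.

```python
# Sketch: a model mistaking random noise for signal (hypothetical data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 20))    # 20 features of pure noise
y_train = rng.integers(0, 2, size=100)  # labels unrelated to the features

model = DecisionTreeClassifier().fit(X_train, y_train)
print("Train accuracy:", model.score(X_train, y_train))  # ~1.0: noise memorised

# New data lacks the same random fluctuations, so the "pattern" vanishes.
X_new = rng.normal(size=(100, 20))
y_new = rng.integers(0, 2, size=100)
print("New-data accuracy:", model.score(X_new, y_new))   # ~0.5: chance level
```

The near-perfect training score and near-random score on new data is exactly the gap between memorised fluctuations and a real relationship.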

"How can we create recruitment AI that is not biased against women? Amazon wanted me to build exactly that, and I told them that the way they were doing it wouldn't work," Dr. Ming explained. "They were simply training AI on their large-scale recruitment history. They have a vast dataset of previous employees. But I don't think it's surprising to any of us that almost all of that history is biased in favor of men, for many reasons."

"It is not that they are bad people; They are not bad people, but AI is not magic. If humans cannot detect sexism or casteism or casteism then AI will not do it for us. "

What can be done to remove bias and build confidence in AI systems?

Dr. Ming is more in favour of auditing AI systems than regulating them. "I am not a big advocate of regulation. Companies, big and small, need to embrace auditing: auditing their AI, algorithms, and data the same way the financial industry is audited," she said.

"If we want the AI ​​system to be fair in hiring, then we need to be able to see what causes someone to be a great employee, not what the 'correlation' is with previous great employees," Dr. Ming explained.

"Is it easy to correlate - elite schools, some genders, some races - at least in some parts of the world. They are already part of the recruitment process. When you apply the causal analysis, going to an elite school is about this. Not indicative of why people are good at their jobs. A significant number of people who do not attend elite schools are as good at their jobs as one went to. We typically have a data set of about 122 million people Are found, there were ten and in some cases about 100 times equally qualified people who attend elite universities. "

To solve this problem, one must first understand whether and how an AI model is biased, and then apply algorithms to remove those biases.

According to Mehta, "There are two parts to the story: one is to understand whether an AI model is biased. If it is, the next step is to provide an algorithm to remove such biases."

The IBM research team has released a range of tools aimed at detecting and reducing bias in AI. IBM's AI Fairness 360 toolkit is one such tool: an open-source library for examining unwanted bias in datasets and machine learning models, with roughly 70 different metrics for measuring bias in AI.
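Here is a minimal sketch of Mehta's two-part story using the open-source AI Fairness 360 toolkit (pip install aif360): first measure bias with one of its metrics, then apply a mitigation algorithm. The tiny hiring dataset and its column names are hypothetical, purely for illustration.

```python
# Sketch: detect bias, then mitigate it, with AI Fairness 360.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical hiring data: hired 1/0; sex 1 = privileged group, 0 = unprivileged.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.9, 0.8, 0.4, 0.7],
    "hired": [1, 1, 0, 1, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["hired"], protected_attribute_names=["sex"]
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Step 1: detect. Disparate impact well below 1.0 is a common red flag.
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact before:", metric.disparate_impact())

# Step 2: mitigate. Reweighing adjusts instance weights so outcomes become
# statistically independent of the protected attribute before training.
transformed = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(
    transformed, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact after:", metric_after.disparate_impact())
```

Reweighing is one of several mitigation algorithms the toolkit ships; it works before training, leaving the downstream model untouched.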

Dasgupta says there have been many cases where a system contained bias and the IBM team was able to detect it. "When we detect bias, it is in the customers' hands how they integrate that into their corrective process."

The IBM research team has also developed the AI Explainability 360 toolkit, a collection of algorithms that support the explainability of machine learning models. This allows customers to understand, improve, and iterate upon their systems, Dasgupta explained.
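The article does not show the AI Explainability 360 API itself, so as a stand-in this sketch illustrates the same underlying idea, working out which inputs a model's decisions actually depend on, using scikit-learn's permutation importance.

```python
# Sketch of model explainability via permutation importance (a stand-in
# technique, not the AI Explainability 360 API).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn; a large accuracy drop means the model
# relies on that feature, which is one way to "explain" its behaviour.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.4f}")
```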

Part of this is a system IBM calls factsheets, which work a lot like nutrition labels, or the privacy labels that Apple recently introduced.

Factsheets answer questions such as ‘Why was this AI created?’, ‘How was it trained?’, ‘What are the characteristics of the training data?’, ‘Is the model appropriate?’, and ‘Can the model be interpreted?’. This standardisation also helps in comparing two AIs against each other.
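As a rough sketch of what such a factsheet might capture as a data structure, here is a minimal example built from the questions listed above. The field names are illustrative and not IBM's actual factsheet schema.

```python
# Hypothetical factsheet structure based on the questions in the article.
from dataclasses import dataclass

@dataclass
class ModelFactsheet:
    purpose: str           # Why was this AI created?
    training_process: str  # How was it trained?
    training_data: str     # What are the characteristics of the training data?
    appropriate_use: str   # Is the model appropriate (and for what)?
    interpretable: bool    # Can the model be interpreted?

facts = ModelFactsheet(
    purpose="Rank job applicants for interview shortlisting",
    training_process="Gradient-boosted trees on 2015-2020 hiring records",
    training_data="1.2M applications; gender and school fields audited",
    appropriate_use="Decision support only; a human makes the final call",
    interpretable=True,
)
print(facts)
```

Standardised fields like these are what make two AIs directly comparable, in the same way two nutrition labels are.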

IBM recently launched new capabilities for its AI system Watson. Mehta said that IBM's AI Fairness 360 toolkit and Watson OpenScale have been deployed in many locations to help customers with their decisions.

