AI draws its intelligence from algorithms and databases. If the database is biased, then the AI is equally biased.
As intelligent machines become more prevalent in various aspects of our everyday lives, it is fair to reflect upon the systems that drive them to decide as they do. There is often bias in the data, depending on how the machine has been structured to work with its underlying database. For example, if you use past hiring data to enable an AI system to assist in your hiring process, the existing data available to it may skew its decisions towards a particular preference. If a company has employed almost exclusively white male workers for the past 10 years, and that history is fed in as the database of preferred successful applicants, then there is a strong chance that women will not be selected.
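To make this concrete, here is a minimal sketch of how a naive model can absorb bias from skewed historical data. The dataset, the group labels, and the scoring function are all hypothetical, invented purely for illustration; a real hiring model would be far more complex, but the underlying problem is the same.

```python
# Hypothetical historical hiring records: (gender, outcome) pairs.
# The data is deliberately skewed: almost all past hires are men,
# reflecting a biased process rather than applicant ability.
history = (
    [("male", "hired")] * 90 + [("male", "rejected")] * 10 +
    [("female", "hired")] * 2 + [("female", "rejected")] * 48
)

def hire_rate(records, gender):
    """Fraction of applicants of the given gender who were hired."""
    outcomes = [outcome for g, outcome in records if g == gender]
    return sum(1 for o in outcomes if o == "hired") / len(outcomes)

# A naive "model" that scores new applicants by their group's
# historical hire rate simply reproduces the bias in the data.
def naive_score(records, gender):
    return hire_rate(records, gender)

print(round(naive_score(history, "male"), 2))    # 0.9
print(round(naive_score(history, "female"), 2))  # 0.04
```

Nothing in the code is malicious; the unfairness comes entirely from the data it was given, which is exactly why the quality of the database matters so much.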
So the guardians of these databases and algorithms have a duty of care to build fair and accurate databases, so that the machines relying on them can make equally fair and accurate decisions within the processes they serve. Controlling this is very difficult, and policing it for fairness even more so. Indeed, Amazon had to abandon its AI-based hiring system after it was shown to be biased towards men for technology roles.
There is no getting away from the fact that machines are now embedded in processes we take for granted every day. If these intelligent machines work with flawed and biased data, they could create very difficult situations that hinder rather than ease the processes AI was intended to improve.
In 2019, US lawmakers introduced the Algorithmic Accountability Act, which would make large technology companies responsible for ensuring that their models work fairly and accurately, with regular checks administered to root out any discrimination. However, this is far easier said than done.