Monday, April 24, 2017

Systemic / Structural Bias / Prejudice and Privilege Embedded in Software and now Artificial Intelligence

Good morning.
"And the beat goes on...................."

When we talk about the concepts of racism and privilege being structural and systemic, we mean that the biases and prejudices - and the perks and advantages - are embedded in the structures and systems by which we operate and the environments in which we live.  Those systemic realities transcend individuals, situations and time, and are often invisible in the way they insinuate themselves into our cultures and influence who we are, our attitudes and beliefs, and our decision-making processes.

Everyone is to some degree a prisoner of their heritages, histories, cultures and environments, and biased and prejudiced accordingly.  Many, if not most, people are largely unaware of the extent to which they have been programmed, of how those programs perpetuate our social and political climates, and of how they shape the way we function daily.  All best intentions aside, that makes it axiomatically more difficult to address the inequity challenges facing the world.

I have been interested of late in the dangers in the growth of Artificial Intelligence (AI), and in how machine learning ultimately threatens the existence of the human species as machines rise to the superior position in their relationship with humans - commanding greater information and knowledge, and a learned kind of wisdom of their own based on precepts and assumptions that they (not we) may make - leading to the day when the machines tire of the human species and see no logical reason for its continued existence.  Science fiction? Maybe, but there are lots of minds out there who share that fear.  Moreover, in the short term, machines - being programmed by human beings - are likely to exhibit the same biases and prejudices as those who programmed them, thus perpetuating the inequities of modern life and the myriad problems stemming therefrom.

A recent article in the Guardian reported on troubling research showing that AI programs exhibit racial and gender biases and prejudices:

"The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.
As machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.
Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”
Bryson warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases."

Developments in AI are happening at an accelerated rate.  This isn't way-off-in-the-future science fiction - it is happening now, and likely to happen ever faster.

This made me wonder to what extent (and with what damage) the software that has guided our computational efforts for the past three or more decades has already been embedded with bias and prejudice.  How has privilege already been incorporated into all of the "smart" devices that now manage and empower our lives?  How many online video games that kids play millions of times a month unintentionally reflect the human biases and prejudices and privileged positions of those who created the games - manifested in language, preferences, rewards and otherwise?  And what is the net effect of that?  How many software programs created to power the search engines we use reflect that same unintentional bias?  In how many of the myriad ways we use computer programs and software have we further increased and deepened the structural and systemic racial, gender, sexual orientation, religious and other prejudices with which we still grapple?

The paper the Guardian reported on also pointed out:

"The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions.
And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.
The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.
These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices."
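The kind of association the researchers measured can be illustrated with a toy sketch.  In word-embedding models, every word becomes a vector of numbers, and "closeness" is typically cosine similarity; a simple bias score is how much closer a target word sits to a set of pleasant words than to a set of unpleasant ones.  The words and vectors below are invented purely for illustration - they do not come from any real embedding model - but the measurement has the same shape as the one described in the paper:

```python
import math

# Toy 3-dimensional "word vectors" -- invented for illustration only,
# not taken from any real embedding model.
vectors = {
    "gift":    (0.9, 0.1, 0.0),
    "happy":   (0.8, 0.2, 0.1),
    "agony":   (0.1, 0.9, 0.0),
    "failure": (0.2, 0.8, 0.1),
    "name_a":  (0.7, 0.3, 0.0),   # stands in for one group of names
    "name_b":  (0.3, 0.7, 0.0),   # stands in for another group
}

def cosine(u, v):
    """Cosine similarity: dot product divided by the vector norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return dot / (norm_u * norm_v)

def association(word, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to
    unpleasant words -- positive means 'sits closer to pleasant'."""
    p = sum(cosine(vectors[word], vectors[w]) for w in pleasant) / len(pleasant)
    u = sum(cosine(vectors[word], vectors[w]) for w in unpleasant) / len(unpleasant)
    return p - u

pleasant = ["gift", "happy"]
unpleasant = ["agony", "failure"]

bias_a = association("name_a", pleasant, unpleasant)
bias_b = association("name_b", pleasant, unpleasant)
print(bias_a > bias_b)  # True: name_a sits closer to the pleasant cluster
```

Nobody typed "associate these names with unpleasant words" into a system like this; the geometry simply reproduces whatever co-occurrence patterns the training text contained - which is exactly Bryson's point that the machines are learning our prejudice, not inventing their own.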

So the problem isn't just the algorithms that will facilitate learning by AI; those biases are already working their negative impacts in the software and programs we use and have been using.  We (in our own field of the nonprofit arts) need to try to figure out which programs, which software specifically - and in which situations - the designers, code writers and creators have imbued their work (presumably unintentionally) with their prejudices and biases, and to what extent that has made systemic racism and privilege more entrenched.  What damage has already been done, and what can we do to change the reality?  That is unquestionably a Herculean challenge, given that little of all that software is specific only to us.  But where it is at least semi-specific to us - such as perhaps in some grant management software - can we identify where the coding or algorithms may reflect bias and prejudice?  Is that even possible?

With some exceptions, most programmers, coders, and program / software creators are probably white men of a certain age.  To the extent we use what they have created in untold numbers of ways in our business and personal lives, continued exposure to their mindset biases eventually has an impact on our thinking and our own experiences.  The more time we spend absorbing the thinking of others - particularly without any avenue of exchange about that thinking - the more it is likely to color our own thinking.  And most of that coloring goes on unnoticed.

A good illustration came up in another article this week - this one in the Nation, about Thought Leaders - in which the author quoted a passage from Barack Obama's 2006 book The Audacity of Hope:

 "Increasingly I found myself spending time with people of means—law firm partners and investment bankers, hedge fund managers and venture capitalists. As a rule, they were smart, interesting people, knowledgeable about public policy, liberal in their politics, expecting nothing more than a hearing of their opinions in exchange for their checks. But they reflected, almost uniformly, the perspectives of their class: the top 1 percent or so of the income scale that can afford to write a $2,000 check to a political candidate. They believed in the free market and an educational meritocracy; they found it hard to imagine that there might be any social ill that could not be cured by a high SAT score. They had no patience with protectionism, found unions troublesome, and were not particularly sympathetic to those whose lives were upended by the movements of global capital. Most were adamantly prochoice and antigun and were vaguely suspicious of deep religious sentiment.
[A]s a consequence of my fund-raising I became more like the wealthy donors I met, in the very particular sense that I spent more and more of my time above the fray, outside the world of immediate hunger, disappointment, fear, irrationality, and frequent hardship of the other 99 percent of the population—that is, the people that I’d entered public life to serve.”

We have all been spending an inordinate amount of time with those who have written the software and programs we have been using in our computers for decades - and with those people's biases and prejudices, born of their own specific upbringings and experiences.  They likely didn't know their baggage was included in their work, and we likely didn't know it either.  But in all probability that is the reality.  This is the ugly, insidious side of systemic, structural prejudice and privilege.

Fighting against this will be a lot harder than anyone could have possibly imagined, but the battles absolutely must be on all these deep levels - and especially for the future.  A biased, prejudiced AI landscape is beyond frightening.  It may pose a threat we are simply incapable of countering.

Have a good week.

Don't Quit.
