The Parallels of AI Technology and Lack of Representation

As technology soars, the world grows ever fonder of the next smart gadget presented to us. However, society is collectively falling prey to what happens “behind the scenes” in the programming that lies within these very gadgets. This is where I want to introduce diversity, equity, and inclusion (DEI): an initiative that calls for all individuals, especially those identifying as members of historically marginalized communities, to have a seat at the table in a workplace that values fair treatment.

The lack of diversity, equity, and inclusion in the realm of artificial intelligence (AI) poses a serious threat to how large language models (LLMs) process their training data. These lapses can silently perpetuate systemic racism, sexism, and other harms against underrepresented communities across the applications built on top of these models.

One of the core problems begins with the data sets used to train AI systems. A model can only learn from the data it is given, so if the input data is biased, the output will be biased as a result. Kortx, a Detroit-based data company, states, “Engineering must iterate on the AI’s input and give more data sets to build better and accurate systems.” Teams within companies must be held responsible for creating and training models that are diverse across the board; an absence of more diverse information triggers biased and/or erroneous results.
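
To make this concrete, here is a minimal, hypothetical sketch in Python. The group labels and numbers are invented purely for illustration; the point is that a model which simply memorizes its training data will faithfully reproduce whatever skew that data contains.

```python
# A toy "model" that memorizes the majority outcome per group will
# reproduce whatever skew its training data contains. All numbers and
# group names here are hypothetical, for illustration only.
from collections import Counter

# Hypothetical loan-approval records: (group, approved)
training_data = (
    [("A", True)] * 900 + [("A", False)] * 100   # group A: 90% approved
    + [("B", True)] * 30 + [("B", False)] * 70   # group B: 30% approved
)

def train_majority_model(data):
    """Return a 'model' that predicts the most common outcome per group."""
    outcomes = {}
    for group, approved in data:
        outcomes.setdefault(group, Counter())[approved] += 1
    return {group: counts.most_common(1)[0][0]
            for group, counts in outcomes.items()}

model = train_majority_model(training_data)
print(model)  # {'A': True, 'B': False} -- the historical skew becomes the rule
```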

Now, let us add more to the equation: the statistics. Most artificial intelligence specialists are white (about 67%), with roughly 11% identifying as Hispanic/Latino, 10% as Black/African American, 5% as Asian, and 0.4% as American Indian/Alaska Native. By gender, almost 91% of AI specialists are men and only about 9% are women. These numbers suggest a serious disconnect between the people who design and program this technology and the groups, by race and gender alike, that remain minimally represented in the workplace.

NPR wrote, “AI has the potential to exacerbate discrimination in things like police surveillance against Black and brown people in financial decision-making and housing opportunities.” In recent years, we as a society have begun to notice how biases reveal themselves in algorithms: systems are taught to make decisions based on their initial training data. Even when attributes like race, sexual orientation, and gender are removed, an AI’s decisions will still be based on the data it was trained with, which includes potentially biased human decisions and reflects historical and social inequities.
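
A small, hypothetical sketch shows why removing the protected attribute is not enough. The ZIP codes and figures below are invented; the point is that any feature correlated with group membership can act as a proxy for it.

```python
# Even with the protected attribute removed, a correlated feature (here a
# hypothetical ZIP code) lets a model reconstruct the same disparity.
from collections import Counter

# Hypothetical records: (zip_code, group, approved). The group label is
# NOT shown to the model; it is kept only to measure the outcome gap.
records = (
    [("48201", "B", False)] * 80 + [("48201", "B", True)] * 20
    + [("48301", "A", True)] * 85 + [("48301", "A", False)] * 15
)

# "Train" on ZIP code alone, i.e., with race/gender/etc. removed.
by_zip = {}
for zip_code, _group, approved in records:
    by_zip.setdefault(zip_code, Counter())[approved] += 1
model = {z: counts.most_common(1)[0][0] for z, counts in by_zip.items()}

# Because ZIP code tracks group membership in this toy data, the
# group-level decisions are just as skewed as if group had been used.
print(model["48201"], model["48301"])  # False True
```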

Now that everyone is caught up on what I believe AI and technology are lacking, let me highlight the changes that need to be made. Going forward, outsourcing this work to international workers at unfair wages should not be considered a solution. Doing so fosters laziness and a lukewarm determination to make up for AI’s flaws, even though humans program the LLMs in the first place. Furthermore, reactive intervention methods sound friendly but usually play out with innocent marginalized people ending up in the prison system, or with affected individuals being passed over for job and career opportunities, before anyone realizes there was AI bias and corrects the algorithm.

Remember that the individuals who are pulled into this cycle can be affected for life if the consequence (prosecution or incarceration) is too severe to be taken back. This delegation of responsibility also unfairly places journalists and minorities in the position of becoming whistleblowers, whether or not that was ever their intention.

As a society, I think this problem can be addressed through a simple company-wide requirement across all channels: large language models must produce false positives and false negatives at equal rates across demographic groups. Another idea is to crack the door wide open for a conversation that will undoubtedly make some individuals uncomfortable. That conversation must involve interrogating current data sets and reserving a seat for DEI experts to sort through the system and catch red flags. Those red flags can then be caught before a program even launches, preventing many of these issues from transpiring. Lastly, I believe hiring diverse and inclusive teams is imperative, as is supporting and uplifting those DEI teams.
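
As a rough illustration of what such a requirement could look like in practice, here is a hypothetical audit sketch in Python (the fairness literature calls this kind of check “equalized odds”). The function names, numbers, and tolerance below are my own illustrative choices, not any standard API.

```python
# A minimal audit sketch for the "equal false positives and false
# negatives" requirement. Everything here is hypothetical.

def error_rates(examples):
    """examples: list of (predicted, actual) booleans for one group."""
    fp = sum(1 for pred, actual in examples if pred and not actual)
    fn = sum(1 for pred, actual in examples if not pred and actual)
    negatives = sum(1 for _, actual in examples if not actual)
    positives = sum(1 for _, actual in examples if actual)
    return fp / max(negatives, 1), fn / max(positives, 1)

def passes_equal_error_rates(groups, tolerance=0.05):
    """groups: dict mapping group name -> list of (predicted, actual)."""
    rates = {name: error_rates(ex) for name, ex in groups.items()}
    fprs = [fpr for fpr, _ in rates.values()]
    fnrs = [fnr for _, fnr in rates.values()]
    return (max(fprs) - min(fprs) <= tolerance
            and max(fnrs) - min(fnrs) <= tolerance)

# Example: group "B" suffers far more false positives than group "A",
# so this model would fail the audit.
audit = {
    "A": [(True, True)] * 45 + [(False, False)] * 50 + [(True, False)] * 5,
    "B": [(True, True)] * 45 + [(False, False)] * 35 + [(True, False)] * 20,
}
print(passes_equal_error_rates(audit))  # False
```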

Please note that change is discomforting… I can vouch for that statement. Yet that is no excuse for our technological world to continue down the path it is on. Those who identify as minorities and members of underrepresented communities deserve AI and technology that benefit us in the same manner they do our majority counterparts. As a Black woman who wholeheartedly feels and experiences the effects of these inequities, I think it is time to hold those in charge accountable. Otherwise, I will simply charge myself with rounding the world up for a “serious talk”.