Gender Gap in the Tech World and the Need for Better Algorithms

With technology improving each day and our reliance on it increasing, we need to address the gaps in the data being processed. With our human biases mirrored in algorithms, transparency and ensuring unbiased models are of utmost importance, especially at the policy level, says PARVATHI SAJIV

—–

THE gender gap in the tech world is glaringly evident. In August 2020, Françoise Brougher, the former chief operating officer of Pinterest, filed suit against the image-sharing and social media company, alleging gender discrimination, retaliation and wrongful termination. Three months later, the parties settled her lawsuit for $22.5 million, one of the largest publicly announced single-plaintiff gender discrimination settlements ever.

That a company as big as Pinterest engages in gender discrimination calls into question how equal gender representation really is in the tech world, especially in the datafied world.

Every action of ours in the digital realm is recorded. It is stored as data, combined and analysed into profiles, and we place our trust in the algorithms to keep us satisfied. We supply data, and data is collected and generated from us.

Given the importance of data in the 21st century, countries worldwide are formulating privacy policies that protect users from having their data misused.

While privacy is a fundamental right of ours, it is important to understand data and privacy through a gendered lens. That lens shouldn't be restricted to cisgender people; it must include people of different sexual orientations and gender and sexual identities.

ALGORITHMIC BIAS

When data is generated, it carries the inherent biases that we humans possess, including bias against minorities and women. This leads firms to develop technologies that carry those biases. Facebook was earlier sued for withholding financial services advertising from older and female users. There are also facial recognition technologies that reinforce inequality by misidentifying women, and several instances of hiring technologies screening out women unfairly.

This gender gap exists because the data collected globally, from economic data to urban planning and medicine, continues to be mostly about men. Caroline Criado Perez reveals much of this gender data gap in her book, “Invisible Women: Exposing Data Bias in a World Designed for Men.”

Therefore, it is important to address the gender inequality that persists in the real world and is mirrored in the digital one. Ruchika Chaudhary, a senior research fellow at the Initiative for What Works to Advance Women and Girls in the Economy and co-author of its study on gender data gaps, has said that the absence of gender-inclusive data leads to ineffective policies on employment and other sectors, including the economy of unpaid care work.

When data is unavailable for people of a particular race, ethnicity, religion, location, or disability, we end up creating policies that do not serve equality.

Policies need to be intersectional, for social and economic norms affect hiring practices, working conditions, and social security.

THE PROBLEMS OF ALGORITHMIC BIAS

When a Muslim man applies for a loan, he may be denied despite his economic background. People from a particular neighbourhood may be wrongly arrested for a crime, and a woman may find her resume ignored for a job she is better qualified for than her fellow male applicants. These are scenarios that play out every day.

AI tends not only to reflect biases but also to amplify them.

The increasing use of technology to make such decisions needs to account for existing algorithmic bias. This is why data needs to be collected and disaggregated by gender, ethnicity, colour and other factors; a one-size-fits-all formula pushes minorities further back, and the digital era pushes them further still. A simple form of such a disaggregated check is sketched below.
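To make this concrete, here is a minimal sketch in Python of the kind of check an auditor might run; the records, groups and numbers are invented for illustration. Instead of looking only at an aggregate approval rate, the rate is computed per group, which is where a gap shows up.

from collections import defaultdict

# Hypothetical audit records: (group, model_decision) pairs, where a
# decision of 1 means "approved" and 0 means "rejected".
records = [
    ("women", 1), ("women", 0), ("women", 0), ("women", 0),
    ("men", 1), ("men", 1), ("men", 1), ("men", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in records:
    totals[group] += 1
    approvals[group] += decision

# The aggregate rate (50%) hides the gap; the per-group rates
# (25% for women, 75% for men) expose it.
overall = sum(approvals.values()) / sum(totals.values())
print(f"overall approval rate: {overall:.0%}")
for group in totals:
    print(f"{group}: {approvals[group] / totals[group]:.0%}")

The same disaggregation applies to error rates: a system that misidentifies women more often than men will only reveal this when its accuracy is measured separately for each group.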

As reported by MyITU, one study found that image-recognition software trained on a deliberately biased set of photographs ended up making even stronger sexist associations. We need to understand what works for a particular section of people and what doesn't.

As firms create technology, biases may creep in at stages such as framing the problem, tuning the model parameters, and interpreting results. These decisions, intentional or unintentional, lead to biased algorithms. By not including different sections of people, these models can create bias against the groups left out, or mark out for greater surveillance those who are over-represented in a database.

POLICY-MAKING AND ALGORITHMIC BIAS

Some of the biggest challenges in technology governance as we formulate policies are framing, representation, and tractability. Susan Etlinger of the Centre for International Governance Innovation says, “First, the way we frame the issue of algorithmic bias is critical. If we view it as a technology problem, we tend to oversimplify potential solutions and exacerbate inequality while creating a false sense of security. Yet, if we frame algorithmic bias as the inevitable outcome of social inequality, it seems utterly intractable.”

The second challenge is representation: the people most affected by governance should be present in the room during decision-making.

This issue was debated when the Personal Data Protection Bill, 2019 first came out. The Bill proposes setting up a Data Protection Authority (DPA) that shall be responsible for protecting citizens' personal data and regulating its use.

As Sara Suárez-Gonzalo says, “The second-wave feminist claim ‘the personal is political’ pushes for critically examining the traditional opposition between public and private spheres.” Given the liberal values that underpin the framework of the PDP Bill, ensuring privacy means guarding the individual against the injury inflicted by invasions of their personal affairs. She says that “It is critical to reducing the factors that give a few corporations the power to undermine citizens’ ability to act autonomously for the protection of their personal data.”

With the power to remove any member of the DPA lying in the Central Government’s hands, the DPA will lack the independence which institutions such as SEBI, TRAI, and CCI enjoy. 

The 2018 draft suggested that appointments to the DPA be made by a diverse committee, but that committee is limited to members of the executive. In terms of representation, Kazim Rizvi, founder of The Dialogue, a technology public policy think tank, believes gender diversity is a must if the DPA is to protect women's sensitive health and other data.

Representatives from other groups are also required, ones who don't face the threat of removal by the Central Government. With powers in the hands of the Central Government and a potential lack of representation of the communities most affected, the DPA runs the risk of being unable to take decisions against the Central Government.

Algorithms and data should also be externally audited and opened to public scrutiny. Understanding cognitive biases is an important part of education.

The representation of minorities shouldn't be treated as tokenism, but actively fought for. As the Observer Research Foundation (ORF) remarked, “The Srikrishna Committee-proposed draft Personal Data Protection Bill, 2018 provides rights to access and confirm personal data.

However, it does not require computer model decisions to be explainable.” The SPDI Rules do not cover algorithmic bias either. Given the fast pace at which technology progresses, policy needs to factor in these changes. The ORF further recommended that workplaces be made more diverse in order to detect blind spots. It is important that minorities are present in these datasets. But another important aspect that needs consideration is the protection of this information.

In their paper titled “Feminist AI: Can We Expect Our AI Systems to Become Feminist?”, Galit Wellner and Tiran Rothman propose solutions to overcome algorithmic bias. One of them is to ensure algorithmic transparency. The underlying assumption is that if we know how the algorithm reached its conclusion, we can detect the bias. Another aspect of transparency relates to how the data was collected and annotated. One modest way this assumption plays out in practice is sketched below.
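As an illustration of that assumption, here is a minimal sketch in Python using scikit-learn; the feature names, toy data and their skew are all invented. With a simple, interpretable model, one can inspect which inputs drive its decisions.

# Toy hiring data: each row is (years_experience, education_level,
# gender_flag), with labels deliberately skewed against gender_flag == 1.
from sklearn.linear_model import LogisticRegression

features = ["years_experience", "education_level", "gender_flag"]
X = [
    [5, 2, 0], [3, 3, 0], [8, 1, 0], [2, 2, 0],
    [5, 2, 1], [3, 3, 1], [8, 1, 1], [2, 2, 1],
]
y = [1, 1, 1, 0, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# A large negative weight on gender_flag is a red flag: the model is
# using gender itself, rather than qualifications, to decide.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")

Deep neural networks do not expose such readable weights, which is why transparency there has to come instead from documenting how the data was collected and annotated.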

The other solution involves the human element. Human involvement is necessary because deep learning algorithms cannot grasp abstract ideas like gender or race. Using the algorithm to identify patterns and humans to understand their meaning helps reduce the gender bias present in the tech world today. One common arrangement is sketched below.
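A minimal human-in-the-loop sketch, with an invented confidence threshold and invented scores: the model decides only when it is confident, and borderline cases are escalated to a human reviewer.

# Route a model's uncertain decisions to a human reviewer. The score
# is the model's estimated probability of a positive outcome; scores
# near 0.5 mean the model is unsure.

def decide(score: float, margin: float = 0.2) -> str:
    """Return an action for a model score in [0, 1]."""
    if abs(score - 0.5) < margin:
        return "send to human reviewer"
    return "approve" if score >= 0.5 else "reject"

for score in (0.95, 0.55, 0.40, 0.05):
    print(f"score={score:.2f} -> {decide(score)}")

Here the human sees exactly the cases where the algorithm's pattern-matching is least reliable, which is where contextual judgment about matters like gender or race is needed most.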

Keeping these solutions in mind, Europe prohibits solely automated decisions where they could significantly affect the persons concerned, establishing a right to a human in the loop and a right to explanation.

India, too, needs a policy that keeps a transparent record of algorithmic models and protects citizens' data. The Personal Data Protection Bill, 2019 needs to address algorithmic bias and the protection of all citizens, including minorities.

(Parvathi Sajiv is a student at the Symbiosis Centre for Media and Communication, Pune, and an intern with The Leaflet. The views are personal.)