How Web3 Can Effectively Address AI Gender Bias


“Web3 presents an ideal framework for individuals to be compensated for their own data,” states Rumi Morales, a founding director of Navigate, a Web3 platform dedicated to creating an AI-driven map utilizing crowdsourced information.

Currently, the competition in the AI landscape among major tech companies remains a captivating topic. Just this past week, Amazon announced its plans to integrate generative AI capabilities into Alexa, its virtual assistant. While this surge of interest in AI is thrilling, it does not eliminate ongoing concerns regarding gender bias within the field.

Addressing this bias in AI starts with ensuring a diverse workforce in the industry. However, the World Economic Forum reports that women make up only 30% of AI professionals, and that representation has grown by just 4% since 2016, lagging behind other tech sectors.

In a discussion with Morales about her role in the AI industry, she highlighted several strategies to combat the gender bias affecting AI systems.

“As a woman in senior AI leadership, I’ve witnessed the various obstacles women encounter in this sector,” Morales shared. “Gender bias can infiltrate AI systems at every development phase, from the data utilized to train models to the teams building and implementing them.”

She outlined four essential approaches to mitigate bias before AI models are fully developed:

1. Assemble diverse and inclusive AI teams.
2. Ensure transparency regarding how AI systems function and the measures taken to counteract bias.
3. Hold ourselves accountable for the results of our systems and address any unintended outcomes.
4. Educate the public, AI practitioners, and policymakers about the realities of AI bias.

“It is vital to actively incorporate women at every step in the AI development process—from ideation to design to deployment. Cultivating a culture that prioritizes safety and respect within the AI community is crucial,” Morales emphasized.

She noted, “Historically, the AI community has often felt exclusive and intimidating for newcomers. This may have been more justifiable during the early stages of AI development when access was limited. However, as AI tools become increasingly accessible, it is imperative to foster an environment that embraces diverse perspectives.”

For existing AI models, Morales offered insights on how to enhance them: “Monitoring systems in active environments is essential for identifying and rectifying emerging biases. Regularly updating AI models with new data is crucial for ensuring fairness and reducing bias, which can involve retraining models with improved datasets or employing methods such as re-weighting or resampling to ensure better gender representation. Most importantly, we must encourage and involve more women in AI research, welcoming their viewpoints in this rapidly changing landscape.”
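The re-weighting and resampling techniques Morales mentions can be sketched in a few lines. This is a minimal illustration, not Navigate's implementation; the record format and group labels are invented for the example.

```python
import random
from collections import Counter

def resample_balanced(records, key="gender", seed=0):
    """Oversample under-represented groups so every group
    appears equally often in the training set."""
    rng = random.Random(seed)
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # draw extra samples (with replacement) to reach the target size
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

def reweight(records, key="gender"):
    """Give each record a weight inversely proportional to its group's
    frequency; these can be passed as sample weights when retraining."""
    counts = Counter(r[key] for r in records)
    n_groups = len(counts)
    return [len(records) / (n_groups * counts[r[key]]) for r in records]

# Toy dataset: 30% one group, 70% the other
data = [{"gender": "f"}] * 30 + [{"gender": "m"}] * 70
balanced = resample_balanced(data)   # 70 of each group
weights = reweight(data)             # minority records weigh more
```

After resampling, each group contributes equally to training; with re-weighting, the original data is kept but minority examples count proportionally more.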

Morales also suggested that Web3 could provide a new avenue for addressing these bias-related data issues. She explained that in a Web3 framework, data is not controlled by a single entity, granting users greater authority over how their data is used. Individuals can track how their data is used, hold companies accountable, and maintain ownership of their data. They can decide who may use it and under what conditions, prompting companies to seek permission and potentially compensate users for their data contributions.
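The ownership model Morales describes can be pictured as a simple consent ledger: the user, not the platform, records who may use their data and on what terms, and every use is logged for auditing. Everything below is a hypothetical sketch; the class names and fields are invented and do not correspond to any real Web3 protocol.

```python
from dataclasses import dataclass, field

@dataclass
class DataGrant:
    grantee: str          # company granted access (illustrative)
    purpose: str          # permitted use, e.g. "model-training"
    compensation: float   # payment per use, in arbitrary units

@dataclass
class UserDataRecord:
    owner: str
    grants: list = field(default_factory=list)
    usage_log: list = field(default_factory=list)  # auditable trail

    def grant(self, grantee, purpose, compensation):
        """The owner decides who may use the data, and for what."""
        self.grants.append(DataGrant(grantee, purpose, compensation))

    def request_use(self, grantee, purpose):
        """A company needs a matching grant; each permitted use is
        logged so the owner can audit exactly how the data was used."""
        for g in self.grants:
            if g.grantee == grantee and g.purpose == purpose:
                self.usage_log.append((grantee, purpose, g.compensation))
                return True
        return False

record = UserDataRecord(owner="alice")
record.grant("map-co", "model-training", 0.05)
record.request_use("map-co", "model-training")  # permitted and logged
record.request_use("ad-co", "profiling")        # no grant, denied
```

The key inversion is that `request_use` is answered by the user's record, not by a platform database, which is what makes the accountability and compensation Morales describes enforceable.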

Morales pointed to her work at Navigate as an example: the platform is building an AI-powered map where contributors earn rewards for providing image data. Users can redeem these rewards for gift cards from numerous global brands, including Airbnb, EA, Under Armour, and Uber Eats. "In contrast to other mapping services like Waze, where the contributions of individuals often go unrewarded and their data is owned by the platform, Navigate ensures that users retain ownership of their data and receive benefits for their contributions."