What is Slow AI?

Slow AI is about the creation of data-driven systems through responsible, methodical, and well-documented processes. It is about prioritizing the voices of people impacted by AI systems and about building systems that promote a fairer and more inclusive society.

It requires understanding the pitfalls and potential harms to communities, which means building diverse teams and seeking out people with lived experience to lead the development of solutions.

A succinct summary of these potential harms can be found in a statement on the “AI Pause” letter by the authors of the Stochastic Parrots paper.

Timnit Gebru, former Google AI Ethics researcher, stated in a press release announcing the founding of the Distributed AI Research Institute that “AI needs to be brought back down to earth. It has been elevated to a superhuman level that leads us to believe it is both inevitable and beyond our control. When AI research, development and deployment is rooted in people and communities from the start, we can get in front of these harms and create a future that values equity and humanity.”

Suresh Venkatasubramanian, former White House AI policy advisor to the Biden Administration from 2021-2022 (where he helped develop The Blueprint for an AI Bill of Rights) and professor of computer science at Brown University, echoed this sentiment in an interview with VentureBeat: “We should do something about [what’s already here], rather than worrying about a hypothetical that might be coming that hasn’t done anything yet. Focus on the harms that are already seen with AI, then worry about the potential takeover of the universe by generative AI.”

So let’s be clear: the collapse of civilization at the hands of AI is not inevitable, but there are real harms that need to be addressed.

The real and present dangers of fast AI

I would like to present a grounded view, with real examples of the dangers of developing AI systems too quickly.

Real World Examples

Let’s look at some examples of AI deployments that demonstrate the dangers and inequities perpetuated by AI systems today:

  • The 2016 ProPublica Machine Bias article detailed the racial biases in third-party software used by judges and by probation and parole officers across the US. These biases were largely the result of biased training data and a discriminatory survey used to train and score people’s risk of recidivism.
  • Drs. Joy Buolamwini and Timnit Gebru’s paper on Gender and Racial Bias in Facial Recognition Software (the Gender Shades study) systematically analyzed facial recognition software developed by major tech corporations and showed that it was far less accurate for women and for people with darker skin tones. The significance of this research can be seen in the documentary Coded Bias on Netflix.
  • Time reported that OpenAI used Kenyan workers paid less than $2 per hour to label extremely violent, sexually explicit, and abusive language and images in order to train a detection algorithm to remove that kind of content from larger datasets. The work was vital to making sure that ChatGPT and other OpenAI models didn’t spew racist and violent output as some previous chatbots have done (e.g., Microsoft’s Tay).
  • An investigation by The Markup found that lenders in 2019 were more likely to deny home loans to people of color than to white people with similar financial characteristics, even after controlling for newly available financial factors that the mortgage industry has said in the past would explain racial disparities in lending. Specifically, compared with similar white applicants, lenders were 80% more likely to reject Black applicants, 70% more likely to deny Native American applicants, 50% more likely to turn down Asian/Pacific Islander applicants, and 40% more likely to reject Latino applicants.
  • Researchers analyzed the extent of racial underrepresentation in dermatological datasets. They concluded that this leads to biased machine-learning models and inequitable healthcare.
  • Algorithmic bias blocked life-saving care for Black patients waiting for kidney transplants. In 64 cases, patients’ recalculated scores would have qualified them for a kidney transplant waitlist. None had been referred or evaluated for transplant, suggesting that doctors did not question the race-based recommendations.
  • Copyright issues abound:
    • Getty Images sued Stability AI, the maker of Stable Diffusion, alleging that more than 12 million of its photos were copied without permission
    • Matthew Butterick filed a class action lawsuit against Microsoft, GitHub, and OpenAI over the GitHub Copilot AI coding assistant. “This is the first step in what will be a long journey. As far as we know, this is the first class-action case in the US challenging the training and output of AI systems. It will not be the last. AI systems are not exempt from the law. Those who create and operate these systems must remain accountable,” he said in a press statement.
  • And let’s not forget the 2021 paper that called attention to the Dangers of Large Language Models 🦜. Among the critiques raised in that paper are the climate impact of training massive models, which contributes to climate injustice for marginalized communities; the reliance on outsourced labor, as mentioned in the Time article above; and the crystallization of social biases that comes from training on biased data.

In General: Ethical Debt

In general, the current issues with so-called AI stem from the “move fast and break things” approach, whereby companies are incentivized to produce models quickly for a profit.

By moving quickly, teams incur significant Ethical Debt, thinking that they might get back to it later. To move fast they:

  • Skip significant data documentation tasks
  • Ignore biased outputs
  • Focus primarily on monetary ROI, potentially ignoring the environmental, social, and economic impacts of the models after they are built
  • Gather massive amounts of data from the internet without regard for lagging copyright rules.

The origin of the Slow AI Movement

There are two conversations about Slow AI, both rooted in the idea that slowness leads to more ethical outcomes:

  • Slowing the User Experience of AI systems, which seems to originate in the Human-Computer Interaction (HCI) community and is based on ethical technology design
  • Slowing the development of AI systems in favor of responsible AI practices, which originates with Timnit Gebru, who is credited with starting the Slow AI Movement through the founding of DAIR in 2021.

Building a Slow AI Movement

As with many other slow movements, Slow AI is a call to move slower and more deliberately through AI development, incorporate responsible AI practices, and take time at each step to analyze ethics, impacts, costs, etc.

Taking a slower approach to the AI/Data Science lifecycle is not only more responsible and ethical but also produces better outcomes.

As I mentioned above, there are two ways to look at slow AI:

User Experience

Some HCI experts advocate for designing slower technology and promote the use of “speed bumps” in user interfaces to encourage people to pause and consider their decisions.

These experts argue that a slower approach is more ethical because it accounts for the fact that many people click through screens, such as terms of service, without reading them. The interface itself is used to establish some level of transparency and agreement between the user and the app developers about how the data will be used.
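
To make the idea concrete, here is a minimal command-line sketch of a speed bump, assuming a simple consent flow. It only illustrates the pattern; the function name, prompt text, and delay are hypothetical choices, not any particular team’s implementation.

```python
# Illustrative "speed bump": before a consequential action, require the
# user to pause and type an explicit confirmation instead of clicking through.
import time


def speed_bump(summary: str, confirmation_phrase: str = "I agree", delay_seconds: int = 5) -> bool:
    """Show a plain-language summary, enforce a short pause, then require
    the user to type the confirmation phrase rather than just press Enter."""
    print(summary)
    print(f"Please take {delay_seconds} seconds to read the summary above.")
    time.sleep(delay_seconds)
    answer = input(f'Type "{confirmation_phrase}" to continue, or anything else to cancel: ')
    return answer.strip() == confirmation_phrase


if speed_bump("This app will share your usage data with third-party analytics providers."):
    print("Consent recorded.")
else:
    print("Action cancelled.")
```

The same pattern applies in a web interface: a short enforced pause plus an explicit acknowledgment of a plain-language summary, rather than a pre-checked box.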

Deliberate and Responsible Development of AI

The other aspect of Slow AI is applying deliberate and responsible best practices to the entire lifecycle of AI systems, with significant emphasis on documentation and transparency throughout the machine learning lifecycle as keys to ethical AI.

Here are a number of ways that you can start thinking about Slow AI if you run a team or a company:

  • Ask “Should we build this?” early and often
  • Create an internal AI Ethics team, develop a Code of Ethics, and develop a data project requirement checklist for your teams
  • Conduct internal audits of ML systems
  • Delina Ivanova gave great advice on episode 26 of the MLOps podcast: meet with stakeholders early and often to understand their needs and the business priorities, and figure out where you can make a difference quickly for stakeholders to buy yourself the freedom to spend more time on projects.
  • Encourage deliberate development by advocating for more time spent on documentation and review at each step of the process.
  • Determine how your data labeling tasks are handled. Check whether your vendor outsources the work and verify that the workers are paid fair wages.
  • Standardize your processes: follow a Data Science Life Cycle (DSLC) and create documentation guidelines that include Datasheets, Model Cards, and more (see the sketch after this list).
  • Hire a diverse team. The only way to ensure that technology works for all people is to have representation in teams building it.
  • Empower your team to share their lived experiences and raise concerns. Your diverse team means nothing unless you embrace it and create a culture where everyone feels welcome and heard.
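
To make the documentation guidelines above concrete, here is a minimal sketch of a model card captured as structured data. The ModelCard class and its field names are illustrative assumptions loosely based on the headings in the Model Cards literature, not a standard API; adapt them to your own guidelines.

```python
# Illustrative model card record; field names are assumptions to adapt.
from dataclasses import asdict, dataclass, field
from typing import List


@dataclass
class ModelCard:
    """Minimal model card; fields loosely follow the Model Cards paper."""
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: List[str] = field(default_factory=list)
    training_data: str = ""
    evaluation_data: str = ""
    metrics: dict = field(default_factory=dict)  # e.g. accuracy overall and by subgroup
    ethical_considerations: str = ""
    caveats_and_recommendations: str = ""

    def to_markdown(self) -> str:
        """Render the card as a Markdown document that can be reviewed and versioned."""
        lines = [f"# Model Card: {self.model_name} (v{self.version})"]
        for key, value in asdict(self).items():
            if key in ("model_name", "version"):
                continue
            lines.append(f"\n## {key.replace('_', ' ').title()}\n")
            lines.append(str(value) if value else "_Not yet documented_")
        return "\n".join(lines)


# Hypothetical example: a card started early in the project, with gaps visible.
card = ModelCard(
    model_name="loan-approval-classifier",
    version="0.1.0",
    intended_use="Decision support for loan officers; not for fully automated decisions.",
    out_of_scope_uses=["Automated denial of applications without human review"],
)
print(card.to_markdown())
```

Empty sections render as “Not yet documented,” which makes ethical debt visible instead of letting it accumulate silently.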

Here are a number of ways that you can start thinking about Slow AI if you are an individual contributor or independent researcher:

  • Learn about a Data Science Life Cycle (DSLC) and about recommended documentation requirements throughout the life cycle. This will differ depending on which one you choose. (CRISP-DM is my favorite!)
  • Consider doing research and giving a presentation to your team about Responsible AI. Better yet, make it a quarterly event! This will be an excellent professional development activity and a chance to stay up to date on the topic while educating your team!
  • Spend adequate time in the business understanding and data understanding phases of the project. Ask a lot of questions. Document thoroughly!
  • Start simply when building models, with as little complexity as possible, and build from there (see the baseline sketch after this list).
  • Advocate for increasing diversity on the team: diversity of culture, background, education, work experience, and lived experience. Then, support the diverse voices on the team!
  • Avoid anthropomorphizing AI through imagery and language as these can be misleading. Consider using Better Images of AI.
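
As a concrete illustration of the “start simply” advice, the sketch below establishes a trivial baseline and a simple, interpretable model before any more complex approach is considered. The file name, target column, and data are placeholder assumptions; only the scikit-learn calls are real.

```python
# "Start simple": a majority-class baseline, then an interpretable model.
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder dataset: swap in your own file; assumes features are already numeric/encoded.
df = pd.read_csv("loan_applications.csv")   # hypothetical file
X = df.drop(columns=["approved"])           # hypothetical target column
y = df["approved"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Step 1: majority-class baseline. Any real model must beat this.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))

# Step 2: simple, interpretable model. Inspect the coefficients, check
# performance across demographic subgroups, document the results, and
# only then decide whether added complexity is actually worth it.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("logistic regression accuracy:", accuracy_score(y_test, model.predict(X_test)))
```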

Data Science Life Cycles Support Slow AI

More ethical outcomes can be achieved by standardizing the process and outputs of Data Science. Standardization creates an expectation of what types of documentation are needed at each step of the process, and documentation leads to transparency.

Adopting a DSLC model like CRISP-DM is the first step in standardizing your processes.
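
As a rough sketch of what that standardization can look like in practice, the snippet below maps the six CRISP-DM phases to the kind of documentation a team might require before closing out each phase. The phase names come from CRISP-DM; the artifact lists and the helper function are suggestions, not part of the standard.

```python
# Illustrative checklist mapping CRISP-DM phases to documentation artifacts.
CRISP_DM_DOCUMENTATION = {
    "Business Understanding": ["problem statement", "'should we build this?' review", "success criteria"],
    "Data Understanding": ["data sources and provenance", "datasheet draft", "known gaps and biases"],
    "Data Preparation": ["cleaning and transformation log", "labeling process and labor notes"],
    "Modeling": ["baseline comparison", "model card draft", "subgroup performance"],
    "Evaluation": ["ethics and impact review", "stakeholder sign-off"],
    "Deployment": ["monitoring plan", "rollback criteria", "final model card and datasheet"],
}


def missing_documentation(phase: str, completed: set[str]) -> list[str]:
    """Return the artifacts still missing before a phase can be closed out."""
    return [item for item in CRISP_DM_DOCUMENTATION[phase] if item not in completed]


# Example: what is still outstanding before the Modeling phase can be closed.
print(missing_documentation("Modeling", completed={"baseline comparison"}))
```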

The Benefits of Slow AI

Aside from the benefits mentioned above, organizations and researchers experience some additional benefits from following a methodical and responsible process:

  1. Improved accuracy and reliability of AI systems
  2. Reduced bias and discrimination in decision-making
  3. Enhanced trust and transparency in AI systems
  4. Better user experience and satisfaction
  5. Increased environmental sustainability and efficiency

The Cost of Slow AI

Slow AI is, well, slower. It will cost companies more money, so they have little incentive to develop responsible AI on their own. That’s not to say that all unregulated AI systems are causing harm, but plenty are, and few of them are vetted beyond current legal requirements.

“Slowing the pace of AI might cost companies money,” according to Timnit Gebru in a 2021 interview with WIRED. “Either put more resources to prioritize safety or [don’t] deploy things,” she added. “And unless there is regulation that prioritizes that, it’s going to be very difficult to have all these companies, out of their own goodwill, self-regulate.”

Conclusion

In a field like Data Science, Machine Learning, and Artificial Intelligence, data professionals have the power to influence the data culture. Let’s advocate for fairness, accountability, and transparency in AI systems.

Anyone working on a data team can help change the culture by

  1. Staying tuned into ethics discussions
  2. Following industry best practices
  3. Following the research from conferences such as the ACM Conference on Fairness, Accountability, and Transparency (FAccT)


Let me know what you think!
