Thoughts on "A People’s Guide to AI"

I first heard about the short book "A People’s Guide to AI" about six months ago, and I finally read it. I enjoyed the book, and decided to share a few informal thoughts.

The book is an introduction to artificial intelligence, algorithms, machine learning, and deep learning with an equity frame. It provides basic definitions, as well as accessible examples of AI applications (e.g., spam filters, facial recognition, language translation, and recommendation engines used by Netflix & Amazon).

The pervasiveness of AI applications across all aspects of our lives gives AI real power to shape our future.

While the accessibility of the book makes it valuable, the authors also raise penetrating questions. A few include:

"What does fairness look like when computers shape decision-making?"

"Who is creating the future, and how can we ensure that these creators reflect diverse communities and complex social dynamics?"

"How do you responsibly collect data in a way that respects everyone’s privacy and consent?"

The book stimulated a few thoughts, many of which, I’m sure, have been raised by others better versed in this area than I am:

  • Pattern Recognition and Discrimination: With AI, machines "successfully recognize patterns and predict what will come next." While pattern recognition has been important to Black liberation (e.g., Harriet Tubman predicting conditions and pursuers during an escape, SCLC predicting a violent overreaction by White police in front of national media), what is the difference between pattern recognition by machines and unfair racial discrimination? How do we distinguish unconscious bias from algorithms? What if those designing the algorithms possess significant unconscious bias? If race is not just skin color but a varying bundle of sticks (e.g., dialect, neighborhood, income, cultural preferences (music, TV, movies, books, magazines, sports, food, consumer brands), religion, social networks, education), why can't machines use these data points to recognize patterns and shape the options of individuals, and collectively of communities (e.g., employment options, educational options, financial products)? How do we distinguish between liberating algorithms that make better decisions than status quo "algorithms" (e.g., credit scores, discriminatory job qualifications that don't measure potential), and oppressive algorithms that "track" Black communities into particular dead-end pathways?

  • Collateral Damage of Both AI & Regulation of AI: As AI is developed and deployed to solve societal problems, how do we ensure that Black communities are not "collateral damage"? How do we prevent attempts to regulate AI from either reinforcing inequities or preventing the use of technology to eliminate inequity?

  • Surveillance of Black Communities: The book's authors note that "people of color in lower-income spaces...tend to be over-surveilled," and accurately flag flawed AI applications like "predictive policing," which does not measure actual crime, fails to account for selective arrests by police, and increases arrests in Black communities. Could regulation that purports to protect privacy also increase disparities in surveillance? If Black communities disproportionately consent to share data to obtain goods and services at reduced prices (e.g., access to job sites, smartphones, entertainment), would this broaden surveillance disparities so that companies and government can more easily monitor and control Black communities? At the same time, how do we grapple with the reality that such consent gives many in Black communities access to technology?

  • What is Meaningful Consent?: How can one give meaningful consent to data without understanding what can be done with the data and without understanding the underlying algorithms? How can a non-tech expert ever have a sufficient understanding to give meaningful consent?

  • Relative Nature of Interests: While proponents fashion AI as a tool designed to help solve problems, isn’t this relative? For example, the productive use of property by one landowner (e.g., a pig farm) can adversely affect a neighbor (e.g., a residential homeowner who doesn’t want to smell pigs). Doesn’t AI empower those who have the resources to apply technology to solve their problems (employers, retailers, etc), and give them advantages over those who do not control the resources and technology (e.g., many in Black communities)?

  • Who is Designing the Systems?: How can Black communities ever really trust AI if Black people do not play a significant role in system design? Even absent conscious self-interest, why won't designers implicitly assume that most users are like themselves and create systems accordingly? The well-documented misidentification of Black people by facial recognition technology is but one illustration of this problem.

  • How Can Black Communities Have A Say?: How can Black communities really have a say in AI that is used in their communities? Why wouldn't a company launch a lucrative product, or use a product that cuts costs, even though it is flawed (e.g., predictive policing, risk assessment)? Why should any concern about structural inequity be treated as merely a "public relations" problem for a company to manage, rather than a fundamental flaw that warrants altering the product or discontinuing its sale or use?

  • How Can We Use Algorithms to Affirmatively Eliminate Structural Inequity?: The book asks "When we are dreaming up our own machine learning algorithms, how do we make sure that they aren't causing harm or reinforcing existing structural inequalities in society?" Perhaps there is a bigger question. Why wouldn't we devise algorithms that affirmatively eliminate structural inequalities, rather than just containing harm? Profit motives could consciously or unconsciously prompt designers to ignore the harm caused by their products; we'll never catch all harms. If we're just trying to mitigate harm, aren't we losing ground? What are simple strategies to acknowledge existing structural inequities and affirmatively design systems to counteract them?

  • Solving Challenges Faced by Black Communities: What are the biggest problems that confront Black communities, and can Black communities use AI as a tool to solve them? How can AI help Black people build wealth, set a policy agenda, hold politicians accountable, and live healthy lives? How do we get AI leaders invested in prioritizing these problems? How do we empower people in Black communities to use AI if technical skills are not widely distributed? How can Black communities call out unfairness in the application of AI and insist on transparency, and at the same time harness AI to benefit Black communities?

  • How Do We Avoid Paternalism in Any Attempt to Use AI to Advance Black Communities?: If we found a new algorithm could disproportionately help Black children, should we deploy it? What if we found the same with regard to Black parents and adults? Who should make these decisions? Companies? Federal or state governments? Local governments in predominantly Black cities and counties? An individual opting into a system operated by a trusted third party?

  • Black Economic History: How do we look at AI in the context of other economically motivated innovations that have shaped Black communities, such as the cotton gin and slavery, the industrial revolution and the Great Migration, and outsourcing and deindustrialization?

  • Collaborations: To what extent does AI pose unique challenges that need to be addressed in the unique context of Black communities? What problems offer opportunities for collaborations with Tribal, Latinx, Asian American, and other communities?

The book ends on this note, followed by a quote from James Baldwin:

"We are still in the beginning of our time with widespread AI. For this reason, our current moment is extremely important. The rules and policies surrounding the technology haven’t been written yet, and we don’t completely know what the long-term effects of AI and machine learning technologies will be within our societies. But we do know that we can affect the trajectory of the future if we are aware of what changes we seek and what we are collectively working towards."

"The world is before you and you need not take it or leave it as it was when you came in."

—James Baldwin
