So you might be okay with Artificial Intelligence and Machine Learning influencing which movies you watch on Netflix or which songs you hear on Spotify. But what if such technologies are implemented in systems that decide whether you receive a loan, get a parking ticket, or get arrested? Even in seemingly harmless implementations, ML often has far-reaching consequences, deeply shaping social dynamics, political directions, cultural biases, and much more.
Designers need the skills and tools to foresee, account for, and prevent harmful consequences of technology. This paper explores a smart city lights concept that aims to be a valuable and ethical use of machine learning for the city of Amsterdam. The system revolves around dimming city lights to save energy depending on which modes of transportation are recognized in the streets. The paper examines the technical criteria for the machine learning model, how to account for mislabeling and contestability, and how such a system would affect social dynamics, cultural norms, and the co-existence of people in Amsterdam.
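To make the idea concrete, here is a minimal, hypothetical sketch (not taken from the paper) of how recognized transport modes might map to dimming levels. The mode names, brightness values, and confidence threshold are all illustrative assumptions; the threshold is one simple way to err on the side of full brightness when the model may be mislabeling.

```python
# Hypothetical sketch (not from the paper): map detected transport modes
# to a lamp brightness level, falling back to full brightness whenever
# a detection is too uncertain to trust.

from dataclasses import dataclass

# Brightness per detected mode; values are illustrative assumptions.
BRIGHTNESS_BY_MODE = {
    "pedestrian": 1.0,  # full light for people on foot
    "cyclist": 0.8,
    "car": 0.5,         # cars carry their own headlights
    "none": 0.2,        # empty street: dim to save energy
}

CONFIDENCE_THRESHOLD = 0.7  # below this, don't trust the label


@dataclass
class Detection:
    mode: str          # e.g. "pedestrian", "cyclist", "car", "none"
    confidence: float  # model confidence in [0, 1]


def target_brightness(detections: list[Detection]) -> float:
    """Pick the brightest level among trusted detections.

    If any detection falls below the confidence threshold, default to
    full brightness: mislabeling a pedestrian as "none" costs far more
    than the energy saved by dimming.
    """
    if not detections:
        return BRIGHTNESS_BY_MODE["none"]
    if any(d.confidence < CONFIDENCE_THRESHOLD for d in detections):
        return 1.0
    return max(BRIGHTNESS_BY_MODE.get(d.mode, 1.0) for d in detections)


# Example: a confident cyclist plus a low-confidence car -> full brightness.
print(target_brightness([Detection("cyclist", 0.9), Detection("car", 0.4)]))  # 1.0
```

Defaulting to full brightness on uncertainty is only one possible safeguard; the paper's discussion of mislabeling and contestability is about deciding where such trade-offs between energy savings and safety should sit.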
You can download the paper below:
Link to paper