This work package aims to address the ethical challenges of building a smart city. The idea of the smart city is often assumed to be morally benign and beneficial. To pursue smart city development, according to the Hong Kong government, is to “make use of innovation and technology (I&T) to address urban challenges, enhance the effectiveness of city management and improve people’s quality of living as well as Hong Kong’s sustainability, efficiency and safety” (Innovation and Technology Bureau 2017). This promise of the smart city depends, however, on the information and data collected from personal devices and from environmental sensors installed by the city government.
Like other technologies, such data and its related algorithms are not morally neutral (O’Neil 2016; Eubanks 2018; Susskind 2018). The development of smart cities could be morally problematic due to their pervasiveness, opacity, and diffused accountability. First, smart cities make surveillance more pervasive. Classical surveillance involves targeted scrutiny of groups and individuals in specific spaces such as prisons, schools, or hospitals. People are often aware that they are being watched. However, in smart cities, the use of networked technologies to monitor mobile devices and the ability to aggregate fragmented data allow surveillance to take place anywhere. Data generated from our daily urban activities is constantly collected, stored, and analyzed by city governments, engineers, and researchers.
The ability to cross-reference large data sets makes it even harder for people to disentangle their data from detection and monitoring (Mittelstadt 2017). This data collection and analysis then form the basis on which community services are provided. Second, the operation of smart cities is mostly opaque to the persons whose data are harvested. Individuals usually have very little idea, if any at all, of how these algorithms work. In other words, residents of smart cities are also living in a “black-box society” (Pasquale 2015). Finally, the use of algorithms in smart cities could obscure the accountability of the organizations that deploy them for the harms they inflict on individuals. Take predictive policing, for example. When there is systematic bias in police-recorded crime data, predictive algorithms will amplify that bias. If the police focus on ethnic minorities or certain poor neighbourhoods, these groups or areas will likely be over-represented in police records, producing crime “forecasts” that subject the same neighbourhoods to further policing in the future. Predictive policing is also problematic because it creates a feedback loop: more policing means that more crimes will be detected, which generates exactly the data needed to justify more policing in the future, and so on (Lum and Isaac 2016). The overall effect is that the algorithms appear to make “impartial” predictions, but that conclusion is misleading in important respects.
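To make the feedback loop concrete, the sketch below is a toy simulation of our own construction, not Lum and Isaac’s actual model; the district names, crime rate, patrol budget, and initial 60/40 split are all illustrative assumptions. It shows how allocating patrols in proportion to recorded rather than actual crime can entrench an initial recording bias.

```python
import random

# Toy simulation of the predictive-policing feedback loop described above.
# Two districts share the SAME underlying crime rate, but district A starts
# with more *recorded* crime (e.g. from historical over-policing). Patrols
# are then allocated in proportion to recorded crime, as a simple stand-in
# for a predictive algorithm. All numbers are illustrative assumptions.

TRUE_CRIME_RATE = 0.05    # identical real rate of detectable incidents
PATROLS_PER_DAY = 100     # total patrols to allocate each day

recorded = {"A": 60, "B": 40}   # initial recording bias: a 60/40 split

random.seed(0)
for day in range(365):
    total_records = recorded["A"] + recorded["B"]
    for district in ("A", "B"):
        # "Predictive" allocation: patrols follow past records.
        patrols = round(PATROLS_PER_DAY * recorded[district] / total_records)
        # Each patrol detects an incident with the same probability in both
        # districts, so more patrols simply mean more new records.
        new_records = sum(random.random() < TRUE_CRIME_RATE
                          for _ in range(patrols))
        recorded[district] += new_records

share_a = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"District A's share of recorded crime after one year: {share_a:.0%}")
# Despite identical true crime rates, the initial 60% share does not wash
# out: each day's new records mirror the existing split, so the early bias
# keeps "justifying" extra patrols in district A.
```

Under uniform patrolling, the two districts’ records would drift toward parity; under record-driven allocation, the arbitrary initial split is preserved indefinitely, which is the self-reinforcing pattern Lum and Isaac (2016) describe.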
In the literature on the smart city, its ethical dimensions are too often simply assumed to be unproblematic. Yet these uses of data raise major ethical dilemmas. What are the moral dilemmas underpinning the trust-transparency nexus in this field of study? Can the public sphere tolerate the private management of public goods? These themes are addressed by work package 4. Dr. Kevin Ip is preparing a paper for the special issue of China Perspectives on “Trust and the Ethical Challenge of Smart City: the case of Hong Kong”.