As Shneiderman writes in his new book, Human-Centered AI, he has somewhat modified his stance in the intervening years, giving greater weight to ending the tedium of performing the same tasks over and over. However, he remains sceptical that AI will surpass or successfully imitate human intelligence – scepticism that extends to new, highly contested applications such as emotion detection.

What is important to Shneiderman, then and now, is designing computer systems so they put the user at the centre. Incorporating human factors into consumer software became a widespread industry concern in the 1990s, when user interfaces shifted from requiring arcane, precisely typed commands to directly manipulating the graphical icons everyone uses today. Shneiderman argues that AI should be no exception, and that a focus on developing AI that helps people will dissolve much of the fear of lost jobs and machine control.

As an example of the distinction he's making between more usual approaches to AI and the human-centred approach he favours, Shneiderman begins by comparing Roombas and digital cameras. Users have very little control over the Roomba, which is designed with a minimalist user interface – that is, a couple of buttons – and does the job of vacuuming carpets on its own without user input. Digital cameras, on the other hand, enable amateurs to be far better photographers while giving them many choices; their design allows users to explore. While people love Roombas, the same ‘rationalist’ approach, when embodied in data-driven systems, becomes limiting and frustrating, whereas the ‘empiricist’ approach empowers humans.

In the bulk of the book, which grew out of 40 public lectures, Shneiderman works methodically through practical guides to three main sets of ideas. First, he lays out a framework to help developers, programmers, software engineers, and business managers think about AI design. Second, he discusses the value of the key AI research goals – emulating human behaviour and developing useful applications. Finally, he discusses how to adapt existing practices of reliable software engineering, safety culture, and trustworthy independent oversight in order to implement ethical practices surrounding AI.

I'm not sure people are still as worried about AI and robots taking their jobs as they are concerned that crucial decisions about their lives will be made by these machines – what benefits they qualify for, whether their job or mortgage applications are seen by prospective employers and lenders, or what pay their work for a platform merits. Shneiderman discusses aspects of this, too, calling attention to efforts to incorporate human rights into the ethics of AI system design.

Few books on AI discuss the importance, for good design, of applying the right sort of pressure to the corporate owners of AI systems to push them towards social fairness. This one does.