For AI to truly succeed, tech culture needs to change

Frances Brown

3D rendered illustration of circuits forming a human brain.

Photo by Steve Johnson on Unsplash

AI can power faster, less transparent software that annoys, frustrates, discriminates and endangers. Or it can act as a genuine support to human progress and flourishing.

The latter outcome will not occur unless tech culture changes, from one that blindly chases the next technological advance regardless of consequences to one that prioritises human needs and takes responsibility for outcomes.

The key to this change is addressing some of the fundamental assumptions and issues that dominate the tech world.

One of these assumptions is that a more technologically complex solution will automatically be better than a lower-tech or human one. The effect of this assumption is that complex solutions are pursued and prioritised regardless of how many additional problems they create. The intricacies of the wider human context are either ignored or treated as an irritating blocker to technological progress. Humans are simply expected to put up with and manage the problems the tech creates, or to change in order to fit more neatly within the tech's demands and constraints.

A familiar consequence of this assumption is the proliferation of online banking and the closure of bank branches, despite the fact that this leaves large numbers of people unable to access or manage their money. The response to this issue is usually to 'educate' people, rather than to provide a better, more inclusive service that meets everyone's needs.

This paper details another fascinating example of the effects of this assumption. The authors acknowledge that it’s very difficult to ensure that fully autonomous vehicles are safe to operate around vulnerable road users. Yet rather than conclude that autonomous vehicles may not be suitable for urban environments, many (though not all) argue that the only solution is to ‘retrain’ road users to notice, understand and respond to a wide range of signals and warnings. The human is expected to accommodate the limits of the technology, even when that means walking down a street becomes an extremely stressful, difficult and dangerous task.

In the world of AI, the effect of this assumption can be seen in the way prominent supporters acknowledge AI’s potential to harm, but argue that because its promised benefits are so great, we must either have faith that the tech will improve or learn to accept and manage the harm. Bill Gates argues, for example, that while healthcare AIs will inevitably misdiagnose patients, the upside is worth it, because the alternative for many is having no healthcare at all. Hidden in this assertion is the idea that it’s just not possible to provide equal access to healthcare for everyone, but that AI can offer a ‘better than nothing’ solution to those who are disadvantaged. He doesn’t discuss what happens when people with no access to healthcare receive their AI diagnosis - or misdiagnosis. Presumably, they have to work that out on their own.

A related cultural issue is the tendency for creators of technology to distance themselves from its effects and consequences - a common practice among those who own social media platforms. The implication of this position is that the creators just build the framework, they’re not responsible for how people use it and have no duty to predict or head off future harm.

In the case of AI, its creators have at best ignored and at worst covered up the homogeneity of the AI community (unsurprisingly, it’s predominantly male and white) and, by extension, the biases and discrimination inherent in systems built on the choices and knowledge of that community. The fact that AI systems can adapt and change on their own creates even greater scope for founders and investors to claim they are not in control. The mantra is that AI ‘can’t be stopped’ - an interesting doctrine implying that the negative consequences are unavoidable because we’re stuck with AI, even if it destroys us.

The truth is that bad, frustrating or dangerous technology does not emerge of its own accord or by accident.

Everything that a piece of technology does is the result of a choice or decision made at some point in the design or development process. If the creators of a technology are aware of potential harm but decide to continue regardless, questions need to be raised as to their motivation and culpability. Claims that consequences are impossible to predict are not an acceptable excuse. Blindly creating a potentially harmful product should be seen for what it is - reckless, foolish and, in some cases, malicious.

AI will no doubt play a large role in our future, but how beneficial that role is depends on the ability of the tech world to let go of unfounded, overblown and self-serving assumptions. Ideas and predictions need to be injected with a healthy dose of realism about the problems AI can actually solve within the complexities of the larger human context. The focus must be on what humans genuinely need, even if in some (or many) situations, AI isn’t the answer.