AI May Be a Force for Good – But Now We're Headed for a Darker Future

Artificial Intelligence (AI) is already reconfiguring the world in conspicuous ways. Data powers our global digital economy, and AI technologies reveal patterns in that data. Smartphones, smart homes, and smart cities influence how we live and interact, and AI systems are increasingly involved in recruitment decisions, medical diagnoses, and court rulings. Whether this scenario is utopian or dystopian depends on your viewpoint.

The potential hazards of AI are repeatedly reported. Killer robots and mass unemployment are common concerns, while some people even fear human extinction. More optimistic forecasts suggest that AI will add $15 trillion to the world economy by 2030 and ultimately lead us to some kind of social nirvana.

We need to recognize the effect these developments are having on our societies. One critical issue is that AI systems reinforce existing social prejudices – with damaging consequences. Several infamous examples of this phenomenon have gained widespread attention: state-of-the-art automated machine translation systems that produce gender-biased outputs, and image recognition systems that classify black people as gorillas.

These problems arise because such systems use mathematical models (such as neural networks) to identify patterns in large sets of training data. If that data is badly skewed in various ways, its inherent biases will inevitably be learned and reproduced by the trained systems. Biased autonomous technologies are troublesome because they can marginalize groups such as women, ethnic minorities, or the elderly, thereby compounding existing social imbalances.
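To make the mechanism concrete, here is a deliberately trivial sketch – the corpus counts and the single-pronoun "model" below are invented for illustration only. A system that does nothing but learn which pronoun is more frequent in skewed training data will reproduce that skew in every prediction it makes.

```python
from collections import Counter

# Invented counts: a corpus where 70% of gendered pronouns are masculine.
training_pronouns = ["he"] * 70 + ["she"] * 30

# The "model" is nothing more than the pattern it finds in the data:
# the relative frequency of each pronoun.
model = Counter(training_pronouns)

def predict_pronoun():
    # The learned pattern is simply the majority class in the training data.
    return model.most_common(1)[0][0]

print(predict_pronoun())  # always "he" – the skew is baked into the model
```

Real neural networks learn vastly richer patterns than raw frequencies, but the underlying principle is the same: whatever regularities the data contains, desirable or not, the model absorbs.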

For example, if AI systems are trained on police arrest data, any conscious or implicit bias manifested in the existing arrest patterns will be replicated by the "predictive policing" AI system trained on that data. Recognizing the serious consequences of this, many authoritative organizations have recently recommended that all AI systems be trained on unbiased data. The Ethics Guidelines for Trustworthy AI, released by the European Commission earlier in 2019, made the following recommendation: "When data is gathered, it may contain socially constructed biases, inaccuracies, errors and mistakes. This needs to be addressed prior to training with any given data set."

Dealing with Biased Data

This all sounds fair enough. Unfortunately, it is sometimes simply unfeasible to ensure that certain data sets are unbiased before training. A concrete example should clarify this.

All state-of-the-art machine translation systems (such as Google Translate) are trained on pairs of sentences. An English–French system uses data that associates English sentences ("she is tall") with equivalent French sentences ("elle est grande"). There may be 500 million such pairings in a given set of training data, and therefore one billion separate sentences.
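As a rough illustration of what this paired-sentence (parallel corpus) format looks like – the handful of pairs below are invented examples, not real training data:

```python
# Invented examples of the paired-sentence format used to train translation
# systems; real training sets contain hundreds of millions of such pairs,
# i.e. on the order of a billion individual sentences.
parallel_corpus = [
    ("she is tall", "elle est grande"),
    ("he is tall", "il est grand"),
    ("the meeting starts at noon", "la réunion commence à midi"),
]

for english, french in parallel_corpus:
    print(f"{english}  ->  {french}")
```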

All gender-related biases would have to be eliminated from a data set of this kind if we wanted to prevent the resulting system from producing gender-biased outputs such as the following:
Input: The women started the meeting. They worked efficiently.
Output: Les femmes ont commencé la réunion. Ils ont travaillé efficacement.

The French translation was generated using Google Translate on 11 October 2019, and it is incorrect: "Ils" is the masculine plural subject pronoun in French, and it appears here even though the context clearly indicates that women are being referred to. This is a classic example of a masculine default being favored by an automated system because of bias in the training data.

In general, around 70% of the gendered pronouns in translation data sets are masculine, while 30% are feminine. This is because the texts used for this purpose tend to refer to men more than to women. To prevent translation systems from replicating these existing biases, specific sentence pairs would have to be removed from the data so that masculine and feminine pronouns occurred 50%/50% on both the English and French sides. This would prevent the system from assigning higher probabilities to masculine pronouns.
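A minimal sketch of what this down-sampling would involve, assuming a toy pronoun list and an in-memory list of sentence pairs – both drastic simplifications of what a real pipeline would need:

```python
import random

# Toy pronoun lists; a real pipeline would need proper tokenization and far
# more comprehensive gender markers in both languages.
MASCULINE = {"he", "him", "his", "il", "ils"}
FEMININE = {"she", "her", "hers", "elle", "elles"}

def pronoun_class(pair):
    words = set((pair[0] + " " + pair[1]).lower().split())
    if words & MASCULINE and not words & FEMININE:
        return "masculine"
    if words & FEMININE and not words & MASCULINE:
        return "feminine"
    return "other"

def balance(corpus, seed=0):
    """Down-sample so masculine and feminine sentence pairs occur equally often."""
    masc = [p for p in corpus if pronoun_class(p) == "masculine"]
    fem = [p for p in corpus if pronoun_class(p) == "feminine"]
    rest = [p for p in corpus if pronoun_class(p) == "other"]
    n = min(len(masc), len(fem))          # keep equal numbers of each class
    rng = random.Random(seed)
    rng.shuffle(masc)
    rng.shuffle(fem)
    return masc[:n] + fem[:n] + rest      # every discarded pair is training data lost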

Nouns and adjectives would also need to be balanced 50%/50%, of course, since they too can indicate gender in both languages ("actor", "actress"; French "neuf", "neuve"), and so on. But this drastic down-sampling would inevitably reduce the available training data considerably, thereby lowering the quality of the translations produced.

And even if the resulting data subset were completely gender-balanced, it would still be skewed in all sorts of other ways (e.g., race or age), and it would be practically impossible to eliminate all of these biases. If one person spent just five seconds reading each of the one billion sentences in the training data, it would take 159 years to review them all – and that assumes a willingness to work all day and night, without lunch breaks.
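For anyone who wants to check the arithmetic behind that figure:

```python
# One billion sentences at five seconds each, read non-stop.
seconds = 1_000_000_000 * 5
years = seconds / (60 * 60 * 24 * 365)
print(round(years, 1))  # ~158.5 years of round-the-clock reading
```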

An Alternative?

It is therefore unrealistic to require all training data sets to be unbiased before AI systems are built. Such high-level requirements usually assume that "AI" denotes a homogeneous cluster of mathematical models and algorithmic approaches.

In reality, different AI tasks require very different types of systems, and glossing over the full extent of this diversity disguises the real problems posed by (say) profoundly skewed training data. This is regrettable, because it means that other solutions to the data bias problem are neglected.

For example, the biases in a trained machine translation system can be significantly reduced if the system is adapted after it has been trained on the larger, inevitably biased, data set. This adaptation can be done using a much smaller, less skewed data set. Most of the data may therefore be strongly biased, but the system trained on it need not be (a rough sketch of this idea appears below). Unfortunately, these techniques are rarely discussed by those tasked with developing guidelines and regulatory frameworks for AI research.
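Here is one way such adaptation could look in practice – a sketch, not a definitive recipe. It assumes a recent version of the Hugging Face transformers library, PyTorch, and the publicly available Helsinki-NLP/opus-mt-en-fr English–French checkpoint; the two "balanced" sentence pairs are invented placeholders, and a real adaptation set would be far larger and carefully curated.

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

# Load a translation model that has already been trained on a large
# (and therefore inevitably biased) parallel corpus.
checkpoint = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(checkpoint)
model = MarianMTModel.from_pretrained(checkpoint)

# A tiny, curated, gender-balanced adaptation set (invented placeholders).
balanced_pairs = [
    ("The women started the meeting. They worked efficiently.",
     "Les femmes ont commencé la réunion. Elles ont travaillé efficacement."),
    ("The men started the meeting. They worked efficiently.",
     "Les hommes ont commencé la réunion. Ils ont travaillé efficacement."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(3):  # a few passes over the small balanced set
    for src, tgt in balanced_pairs:
        batch = tokenizer(src, text_target=tgt, return_tensors="pt", padding=True)
        loss = model(**batch).loss  # cross-entropy against the balanced reference
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

The design point is that the expensive training on the huge, biased corpus happens once, while the cheap adaptation step on a small, carefully balanced set is what shapes the system's final behavior.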

If AI systems merely reinforce existing social imbalances, they impede rather than promote positive social change. But if the AI systems we increasingly use every day were far less biased than we are, they could help us recognize and confront our own lurking prejudices.

This is surely what we should be working towards. AI developers therefore need to think much more carefully about the social consequences of the systems they build, while those who write about AI need to understand in more detail how AI systems are actually designed and developed. Because if we are indeed approaching either a technological idyll or a catastrophe, the former would be preferable.
