AIs Are Getting Smarter, and Faster. That's Creating Tricky Questions That We Can't Answer

Artificial intelligence (AI) today refers to a clever but limited set of software technologies. However, if AI becomes more sophisticated and pervasive in the future, we may be compelled to reconsider the rights and wrongs of how we treat AIs – and even how they treat us.

AIs are currently limited to specific jobs such as image identification, fraud detection, and customer support. However, as AIs advance, they will become more autonomous, and they're sure to make mistakes at some point. When AIs make mistakes, who is to blame? This question will concern businesses and thrill attorneys as they try to figure out who can and should be held liable for any resulting harm.

In the majority of AI-related concerns today, it's clear who's to blame. If you buy an AI and run it straight out of the box, any harm is almost always the manufacturer's fault. If you build an AI and train it to do something harmful, the fault is probably yours. But it won't always be so straightforward.

The problems arise when these systems acquire memories and gain agency, which means they begin to do things that neither the manufacturer nor the user intended.

"This is where the responsibility gap exists. Within the next ten years, there will be exciting examples like that, where it's unclear who's to blame for actual harm done in the world "Christopher Potts, a Stanford University faculty affiliate at the Stanford Institute for Human-Centered Artificial Intelligence, agrees.

One technique to keep an AI from developing in ways its maker disapproves of is to design it so that any decision it makes must be explainable to humans. The risk is that by insisting AI be entirely understandable to the general public, we may miss out on many of its benefits. "But, in my opinion, our ability to do that introspection will always be surpassed by our ability to develop ever more powerful models that perform even more remarkable feats. As a result, there will always be a chasm," says Potts.

We may simply have to accept that we will not always understand why AIs act the way they do and live with uncertainty — after all, we do the same with other humans.

AIs may grow so clever that they are held legally and morally responsible for their conduct whether we understand them or not. Non-human entities can already be held legally liable for misconduct thanks to a concept known as corporate personhood, under which corporations enjoy many of the same legal rights and obligations as humans. The same might eventually apply to AIs.

That means that if AIs are found guilty of a crime in the future, we may have to consider whether they should be punished if they don't grasp the rights and wrongs of their conduct, which is often a criterion for criminal liability in humans.

When it comes to punishment, it's also important to consider whether AIs have any rights that could be violated by the way they're punished. However, for some, debating AI rights at a time when human rights are still not universal – and AI capacities are decades away, at best, from matching humans' – may appear to be ethics running far ahead of technology.

According to Peter van der Putten, assistant professor of AI at Leiden University and director of decisioning at Pegasystems, questions like these detract from the more immediate challenges of artificial intelligence by focusing on hypothetical scenarios and levels of AI ability that are more science fiction than reality.

"Morality and ethics are crucial, in my opinion. But I'd almost argue that it's too soon to think about giving AIs rights because when we talk about AI taking over the world, being utterly autonomous at some point in the future, or when the singularity occurs, we're disregarding the fact that the future is nowhere "he declares

AI is already being used at scale, and its use should be visible, explainable, trustworthy, and unbiased, which is not always the case, according to van der Putten. Questions have been raised regarding the influence of biased AI on everything from healthcare to policing, and even with today's relatively modest AI systems, detecting and correcting biases is difficult.

"Before we even consider giving AI rights in the far future, we must first tackle the problem or difficulties at hand, as well as seize the opportunity that AI offers today," van der Putten argues.

Although human-level AI is a long way off, that doesn't mean we should put off discussing moral issues until then. The moral status of a person is not always determined by his or her IQ. As a result, AIs' rights may warrant further consideration long before they reach our levels of intelligence.

"More moral status can be graded. It's possible that AI won't be granted personhood for a long time. However, AIs may soon achieve a variety of situations. As a result, they may become cognizant or begin to feel discomfort exceptionally quickly. As a result, for each of those processes, we must ensure that we do not mistreat AIs based on their moral position, "S. states.Matthew Liao is the head of the Center for Bioethics at New York University and an affiliated professor in the Department of Philosophy.

And, in the far future, when AIs achieve capacities that surpass our own, they may have interests and rights that we haven't considered. Academics Nick Bostrom and Eliezer Yudkowsky, for example, have pondered whether an AI that perceives time differently from humans has the right to control its subjective perception of time. (This has ramifications for whoever provides the AI with hardware: if an underpowered computer slows the AI's perception of time beyond what is tolerable, the AI may be entitled to different hardware.)

As AIs gain moral significance, we may need to address another ethical issue: could AIs' higher degrees of intelligence and consciousness mean that they deserve a higher moral status than humans? If we had to choose between rescuing an AI and rescuing a person, would we have to save the AI?

Yes and no. One argument against granting such superior status is that we don't assign greater moral standing to brighter people or to those with a stronger moral sense; instead, we presume that every adult has the same moral status and, as a result, should be treated equally. If there were a flood and you could save only one person, you'd presumably consider factors other than IQ before deciding whom to save. By the same token, even if AIs' intelligence surpasses ours, it doesn't necessarily follow that this earns them a higher moral status than ours. There's a case to be made that we should all be treated the same because we've all passed the same threshold of intelligence.

The counterargument is that as AIs gain new powers far beyond those possessed by humans, and potentially beyond the scope of human conception, those abilities may become so advanced as to warrant granting AIs greater moral status.

According to Liao, many philosophers are concerned about whether AIs will one day have a higher moral status than humans, perhaps even personhood. "And the answer is that, while I believe they may have a higher moral status, it will not be because they have more intelligence, emotions, or morals; rather, it will be because they will have different traits."

"I'm not sure what those characteristics would be... but they could be something rather unique, and we should recognize them as moral agents who deserve more protection," he argues.
