Artificial Intelligence and the Future of Human Rights

Artificial intelligence seems to be the buzzword of the moment, and its influence is being felt everywhere around us. The usual response to artificial intelligence and its future is fear: most people believe that as artificial intelligence develops further, the number of jobs available to humans will shrink to almost nothing, and public figures such as Elon Musk and Bill Gates have already expressed their reservations about it. That, however, is not the topic of this article. Instead, we will delve deeper into the legal aspect of artificial intelligence, namely the possibility of granting legal personality to artificially intelligent machines.

There is a real possibility that someone has already read a good article on the Internet without realizing that it was written entirely by an artificially intelligent machine. Recently, the Kingdom of Saudi Arabia became the first country in the world to officially grant the rights of a human being to a robot, opening a Pandora's box around the world: is the future already here, and is it time to seriously consider whether a machine can hold rights? The issue of robot rights is a fairly new topic that is still in its nascent stage, but one may wonder why it has suddenly become such a hot issue. The answer may be that advances in technology and science have made it possible for humans to create objects that resemble a living human, and the bond that people may feel towards them has fuelled the current debate, with Saudi Arabia's grant of rights to a robot as the immediate trigger.

A question that immediately arises is why robots might need rights in the first place, when in a technical sense they are non-living beings and, more importantly, lack what we call consciousness, the two qualities usually treated as the grounds on which an entity can be granted rights. Yet there are examples in the present world of rights being granted to non-living entities. The most prominent is the corporation, which in the true sense of the word is a non-living entity without any consciousness, but which is nevertheless granted rights for the simple reason that this makes it convenient for human beings to conduct their business. One might still wonder why it is necessary to grant a robot rights similar to those of a human being, since there appears to be no benefit comparable to that of granting rights to a corporation. That may seem like a fair point, but the various experiments that have been conducted suggest that it may indeed be necessary to grant such rights to a robot, and they raise the question of why a right should ever be granted to an entity that has no consciousness.

But what exactly is consciousness? Different people define it differently, and most agree that it is a very complex phenomenon. It can be loosely defined as the ability to react to and think about an external stimulus, and it is usually treated as the phenomenon that makes an entity a living being from which rights arise. On this view, artificial intelligence falls short, because it does not have the ability to think on its own and can only do what it is programmed to do.
But the ever-increasing progress in robotic technology, and current examples such as Sophia, the humanoid robot that continuously learns as it communicates, show that the rapidly diminishing line between a human and a robot will seriously challenge our understanding of what distinguishes the two. And if that distinguishing factor is consciousness, then what exactly is consciousness?

The discussion of consciousness is as fascinating as it is important. Some people define consciousness as a form of intelligence. On that view, a robot can be called intelligent if it can perform a task without any supervision, constantly improve, and learn new things; if these conditions are satisfied, the machine can be said to be intelligent. But intelligence cannot be equated with consciousness, since animals and plants may not be intelligent in the way we usually define the term, yet we can agree that they are indeed conscious. We may in fact be trapped in our own ignorance if we think of consciousness as something objective and binary, as something an entity either has or does not have. Just as there can be different forms of intelligence, there could also be different forms of consciousness. Part of the issue in this debate is that, of all the potential candidates for consciousness in the animal kingdom outside of humans, octopuses are by far the furthest from us. Their phylogenetic branch diverged from the human one almost a billion years ago, which means that if they developed consciousness, they have had some 750 million years to evolve it along a path entirely different from ours. The experience of consciousness for an animal with eight limbs, the ability to camouflage itself and a life underwater presumably does not resemble our own.

Consciousness aside, there is another factor that should make us think more seriously: failing to grant rights to machines capable of interacting with human beings can actually influence the way humans treat other human beings. To understand how this could happen, Kate Darling, a researcher at the MIT Media Lab in Cambridge, Massachusetts, conducted a study using Pleo, a toy robot dinosaur. Pleo does not look realistic, because it is obviously a toy, but it is programmed to act and speak in ways that suggest not only a form of intelligence but also the capacity to experience suffering. If you hold Pleo upside down, it will moan and tell you to stop in a frightened voice. In an effort to see how far we could go in extending compassion to simple robots, Darling encouraged participants at a recent workshop to play with Pleo and then asked them to destroy it. Almost everyone refused. "People are primed, subconsciously, to treat robots like living beings, even though on a conscious level, on a rational level, we understand perfectly well that they are not real." The experiment demonstrates that human beings have a natural capacity for empathy towards a creature that is not even a lookalike of a living being; the mere fact that it can express emotions, even programmed rather than natural ones, compels us not to harm it, despite our knowing that it has no real sense of pain or suffering when it is damaged or destroyed.
This conclusion is important from a human perspective. It demonstrates that, because our empathy towards things that can express emotions arises from our general empathy towards other human beings, refusing to grant basic rights to robots or humanoids that look, behave and emote just like other humans could have a serious influence on the way we treat one another.