Every seven years or so, a new trend appears in the world of technology and works its way into much of what we use in our daily lives, or what is imposed on us, whether through the Internet, government services, or elsewhere. That is the case today with artificial intelligence (AI) and face recognition. The technology is constantly evolving, companies adopt it in different ways, and everyone wants a piece of it. But all of this comes with risks. What are they?

Artificial intelligence poses serious problems and a real threat to privacy. Are you ready to make the sacrifice?


Self-developing technology, more than you might think


What is new and distinctive about artificial intelligence is that it is self-learning; that is what makes it genuinely "smart". Today's technology is undoubtedly better than yesterday's, and tomorrow's will be better and more accurate still. But how does this technology evolve? There are several approaches, and they differ from company to company.

Apple, for instance, relies heavily on the power of its processors and its neural processing chips to run AI computations locally on the device, without having to send much information back to Apple, with the aim of preserving user privacy. But this makes it harder for the company to improve the technology, since it collects so little information, and the user can turn off even that collection. The task of training the AI is therefore left largely to Apple's own engineers.
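To make the idea concrete, here is a minimal Python sketch of the on-device approach, using invented names rather than Apple's real APIs: the model already lives on the phone, and the raw photo never leaves it.

```python
# Hypothetical sketch of on-device inference: the photo is analyzed locally,
# and nothing is uploaded to the company's servers.

from dataclasses import dataclass


@dataclass
class Prediction:
    label: str
    confidence: float


class LocalModel:
    """Stand-in for a model shipped with the device and run on its neural chip."""

    def predict(self, image_bytes: bytes) -> Prediction:
        # A real device would run a neural network here; we return a fake result.
        return Prediction(label="cat", confidence=0.93)


def classify_photo_on_device(image_bytes: bytes) -> Prediction:
    model = LocalModel()  # the model weights are already stored on the phone
    result = model.predict(image_bytes)
    # The raw image is never sent over the network, so the company learns nothing
    # from it; the trade-off is that it also cannot use it to improve the model.
    return result
```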

As for Google, the king of artificial intelligence, it relies on the processing power of its own cloud servers, which our devices reach over the Internet. This lets the company run highly complex computations without needing to develop special processors the way Apple does. Google also collects data from everyone and everything to develop its artificial intelligence: search queries, image searches, and Google Photos with its face recognition and classification feature. Everything trains the AI.
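The cloud approach looks quite different. Below is an equally rough sketch, with a made-up endpoint URL, showing how the raw data itself is handed to the provider, which can answer the query and also keep the data to train its models.

```python
# Hypothetical sketch of cloud-based inference: the raw image is uploaded to the
# provider, which both answers the query and may retain the data for training.

import json
import urllib.request

CLOUD_ENDPOINT = "https://example.com/v1/classify"  # invented URL, for illustration only


def classify_photo_in_cloud(image_bytes: bytes) -> dict:
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        # The provider now holds a copy of the image and can add it to its training set.
        return json.loads(response.read())
```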

Even those image challenges on the Internet that want to "make sure you are a human" and then ask you to select all the squares that contain a bus are, in fact, training the AI to recognize objects such as buses, traffic lights, and so on.
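The mechanism behind that is easy to sketch: every tile you click becomes a labeled example that can later feed an object-recognition model. The toy code below only illustrates the idea; it is not how any real CAPTCHA service is implemented.

```python
# Toy illustration of how CAPTCHA clicks can double as labeled training data.

from typing import List, Tuple

# Each tile becomes an (image, label) pair, e.g. "bus" / "not_bus".
training_examples: List[Tuple[bytes, str]] = []


def record_captcha_answer(tiles: List[bytes], selected: List[int], target: str) -> None:
    """Store the user's selections as positive/negative examples for the target class."""
    for index, tile in enumerate(tiles):
        label = target if index in selected else f"not_{target}"
        training_examples.append((tile, label))


# Later, an object-recognition model (buses, traffic lights, ...) is trained on these pairs.
```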


Racism problems

Racism appears to be a problem that will take time to solve, even in the tech world. Several incidents have made the news in which artificial intelligence systems drastically altered their results based on skin color and facial features. The latest was in April, when Google's AI returned the correct label for a picture of a white person holding an electronic thermometer, yet recognized the same image, with a dark-skinned person, as someone carrying a weapon.

Perhaps we really do need everyone's effort to solve this problem. Perhaps it is the natural outcome of leaving technology development to the West alone.


Always watched

One of the most important uses of artificial intelligence is face recognition. It is not limited to automatically unlocking your phone or grouping photos by person; it is used in many other applications, and among the most consequential are government surveillance systems, whether for monitoring streets, facilities, or anything else.

This point is controversial precisely because the technology is so new: in most of the world we do not yet have explicit laws or technical rules governing how companies and governments may use these systems, and there are real fears of misuse. Imagine a private company able to buy a system that, linked to a database, needs only to recognize your face as you pass in front of a building to pull up all of your stored information. Or imagine a government misusing the technology to single out one group over another for tighter security measures or worse. All of these are possibilities.
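Technically, that scenario is not far-fetched, which is part of what makes it worrying. Here is a rough sketch, with invented names and a toy similarity measure: a camera frame is converted into a face "embedding" and compared against a database of known people to pull up the matching record.

```python
# Rough sketch of embedding-based face lookup against a database of known people.
# The embedding function, names, and threshold are illustrative assumptions.

import math
from typing import Dict, List, Optional


def embed_face(image_bytes: bytes) -> List[float]:
    """Stand-in for a neural network that maps a face image to a vector."""
    return [byte / 255.0 for byte in image_bytes[:128]]


def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def identify(frame: bytes, database: Dict[str, List[float]], threshold: float = 0.9) -> Optional[str]:
    """Return the identity whose stored embedding best matches the camera frame, if any."""
    query = embed_face(frame)
    best_name, best_score = None, threshold
    for name, stored_embedding in database.items():
        score = cosine_similarity(query, stored_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # None means no confident match
```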

Echoing these doubts and concerns, IBM, a major AI leader, announced that it had stopped selling, developing, and even researching face-recognition technologies.


What do we get in return?

Of course, none of this is simply a corporate scheme to endanger users' privacy. It is the foundation of the very services users enjoy: from the smart features of Google Photos, to the impressive capabilities of new cameras, whose hardware development has slowed as image quality comes to rely on AI processing instead, to the smoothness of cloud systems and self-driving cars.


Are you ready to pay?

This is the important question. Are you, my friend, as a user of all these services, ready to risk your information, part of your privacy, or your freedom to move about without constant monitoring, in order to enjoy these technologies?

Or should we slow down and reconsider how AI is developed and the techniques used to improve it? Especially since what we have mentioned is only a fraction of the technology's problems.

It is clear that AI promises powerful capabilities for the future. What seems certain now is that its engine of development needs to slow down a little while we review the rules and agree on new social contracts that guarantee the best adoption of the technology, so that we get its advantages while minimizing the risks as much as possible.


Where do you stand in the conflict between progress and privacy? Do you think some sacrifice is acceptable?

Sources:

TheVerge | AlgorithmWatch | OnMSFT
