Voice cloning: a new tool in the criminal world

As technology advances, it brings new advantages as well as new risks. Voice cloning technology has become very popular. In the beginning it was used mainly for entertainment, but it has since become a source of serious concern. There have already been cases in which millions of dollars were stolen with the help of fake voices generated by deepfake technology. Yet many people still have only a vague idea of what this voice cloning, or voice copying, technology actually is.

This technology clones, or mimics, someone’s voice using artificial intelligence software running on a computer, creating an artificial voice that imitates a person’s voice exactly. All that is needed is a recording of the person’s voice lasting a few minutes. From this, the software learns the sound of the voice and the accent, in short, how that person speaks. The technology has now become so refined that the result no longer sounds mechanical.

The software mimics the way a person speaks: the accent, how fast or slowly they talk, how loud or soft the voice is, where the person pauses for breath between words, and how light or serious the tone of voice is. Once it has learned all the features of a person’s voice, any word or sentence typed on the computer keyboard is reproduced exactly in that person’s voice, so that it seems the person is speaking directly. Not only that, if someone needs a fake voice, the software can also add various kinds of emotion, such as anger, fear, joy, love, estrangement, or annoyance.
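To give a concrete, if simplified, picture of how such a "type text, get a cloned voice" system is driven, here is a minimal sketch using the open-source Coqui TTS library and its XTTS v2 voice-cloning model. The library, model name, and file paths are illustrative assumptions, not the specific product described in this article; a few minutes of clean reference audio of the target speaker is assumed.

```python
# Minimal voice-cloning sketch (assumed setup: Coqui TTS installed via `pip install TTS`).
from TTS.api import TTS

# Load a multilingual voice-cloning model; weights are downloaded on first run.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize typed text in the voice captured in the reference recording.
tts.tts_to_file(
    text="Please transfer the funds to the usual account.",
    speaker_wav="reference_recording.wav",  # placeholder: short sample of the target voice
    language="en",
    file_path="cloned_output.wav",          # placeholder output file
)
```

The point of the sketch is simply that, once a short voice sample exists, producing new speech in that voice is reduced to a function call with arbitrary text, which is exactly what makes the technology attractive to fraudsters.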

VocaliD was created by Rupal Patel, who is the company’s chief executive and a professor of communication sciences and disorders at Northeastern University. Patel started the business in 2014 as an extension of her clinical work: she founded the company on this technology out of the urge to artificially create voices for patients who have lost the ability to speak because of illness or after surgery.

But recently a darker side of this remarkable technology has come to light. Experts are concerned that it offers fertile ground for cybercrime, because it becomes impossible to tell whether the person you are talking to is real or fake, making it very easy for criminals to trap you. Voices duplicated in this way are called audio “deepfakes”, just like the fake videos created with the same digital technology.

Cybersecurity expert Eddie Bobaritsky says, “Until now, when we talked to someone on the phone, we could at least be sure that the familiar voice on the line belonged to the person we knew and could be trusted. But now that is changing. Suppose the boss calls an employee and says, ‘I need some information. It’s sensitive, confidential information.’ The employee thinks, ‘I know the boss’s voice,’ and hands the information over without hesitation. This is a golden opportunity for cybercriminals.”

The fear is that while this technology may be a blessing for voice artists, it could also become a dangerous tool in the hands of criminals.
