AI & a Photo to Make Fake Videos



Since the release of modern AI tools, people have used them for a wide variety of purposes. However, AI-generated media has created tension throughout the United States and other countries, to the point that many people no longer know which sources to trust. Skilled programmers have produced a method to create extremely realistic videos of anyone appearing to do or say anything. The Samsung AI Center released research showing that it could animate still photos of people by training AI on specific data sets, and posted example results of its work on YouTube.


The work is quite similar to deepfakes, a combination of the terms "deep learning" and "fake": convincing fake videos and audio made with cutting-edge yet relatively accessible AI technology. Deepfakes typically use a machine learning method known as a generative adversarial network (GAN). The spread of these videos is raising concerns for everyone from political leaders and the US intelligence community to the public itself. Though the AI can only manipulate the area from the neck up, people can still be fooled into believing these videos. Voters could see their representatives on screen communicating messages completely opposed to their true positions, which would create confusion within both the Democratic and Republican parties. A single individual manipulating the public through fake videos could lead to economic disasters, damage to businesses, and many other problems. Considering that almost everyone has a phone and uses social media, the majority of people could be led to false information, resulting in a variety of disasters.
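To give a rough sense of how a GAN works, here is a minimal sketch in a toy one-dimensional setting (an illustrative assumption, not Samsung's actual system): a generator learns to turn random noise into numbers resembling "real" data, while a discriminator simultaneously learns to tell real samples from generated ones. The two networks here are reduced to single linear models so the adversarial training loop fits in a few lines.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp to avoid overflow in math.exp for extreme inputs.
    x = max(-60.0, min(60.0, x))
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples from a Gaussian (an assumed target distribution).
REAL_MEAN, REAL_STD = 4.0, 1.0

# Generator: x_fake = a * z + b, where z is uniform noise.
a, b = 0.1, 0.0
# Discriminator: D(x) = sigmoid(w * x + c), the probability x is real.
w, c = 0.1, 0.0
lr = 0.01

for step in range(5000):
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    z = random.uniform(-1.0, 1.0)
    x_fake = a * z + b

    # Discriminator update: push D(x_real) up and D(x_fake) down
    # (gradient of -log D(x_real) - log(1 - D(x_fake))).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * (-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * (-(1 - d_real) + d_fake)

    # Generator update: push D(x_fake) up, i.e. fool the discriminator
    # (gradient of -log D(x_fake) through x_fake = a * z + b).
    d_fake = sigmoid(w * x_fake + c)
    dx = -(1 - d_fake) * w
    a -= lr * dx * z
    b -= lr * dx

gen_mean = sum(a * random.uniform(-1, 1) + b for _ in range(1000)) / 1000
print(f"generated sample mean after training: {gen_mean:.2f}")
```

After training, the generator's output mean drifts toward the real data's mean: neither side ever sees the other's parameters directly, yet the competition pulls the fakes toward the real distribution. Real deepfake systems apply the same adversarial idea to deep convolutional networks over images rather than single numbers.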


A probable near-future solution is policing the internet for this kind of content. The idea itself is fairly new, and fake videos are hard to find if people try to spot them by eye alone. Using AI to detect such videos automatically would be the ideal approach, but even then, programmers and current software can only do so much. So what will be the solution to this problem? Will we be able to detect false videos, or will they slip through the internet undetected?