
The 7 most dangerous technology trends of 2020 we should know about


As we adopt and enjoy the latest technology trends, we should also understand their negative impacts. Here is a list of dangerous technology trends that people should be aware of.

1 Drone swarms

Image by nextgov.com

The British, Chinese, and United States armed forces are competing to research and test how drones could be used for military purposes.

The idea was inspired by swarms of insects working together: drone swarms could change how wars are fought, whether by overwhelming enemy sensors with sheer numbers or by covering a large area effectively in search-and-rescue missions.

The difference between swarms and the way the military uses drones today is that a swarm can coordinate itself based on the situation, interacting with its own members to achieve a goal. While this technology trend is still in the experimental phase, the possibility of a swarm smart enough to organize its own actions is becoming closer to reality.
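A minimal sketch of the kind of decentralized coordination described here, using the classic "boids" flocking rules (cohesion, separation, alignment). The parameters and scenario are purely illustrative and not drawn from any real military system; the point is that coherent group behaviour emerges with no central controller:

```python
import math, random
random.seed(1)

# Each drone follows three local rules (Reynolds' "boids"):
# cohesion (steer toward nearby drones), separation (avoid crowding),
# and alignment (match neighbours' velocity). There is no central
# controller; group behaviour emerges from local interactions only.

N = 20
NEIGHBOUR_RADIUS = 3.0
DT = 0.1

drones = [{"x": random.uniform(0, 10), "y": random.uniform(0, 10),
           "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)}
          for _ in range(N)]

def step(drones):
    new = []
    for d in drones:
        nbrs = [o for o in drones if o is not d and
                math.hypot(o["x"] - d["x"], o["y"] - d["y"]) < NEIGHBOUR_RADIUS]
        vx, vy = d["vx"], d["vy"]
        if nbrs:
            # cohesion: accelerate toward the neighbours' centre of mass
            cx = sum(o["x"] for o in nbrs) / len(nbrs)
            cy = sum(o["y"] for o in nbrs) / len(nbrs)
            vx += 0.05 * (cx - d["x"])
            vy += 0.05 * (cy - d["y"])
            # separation: push away from any neighbour that is too close
            for o in nbrs:
                if math.hypot(o["x"] - d["x"], o["y"] - d["y"]) < 1.0:
                    vx -= 0.1 * (o["x"] - d["x"])
                    vy -= 0.1 * (o["y"] - d["y"])
            # alignment: nudge velocity toward the neighbours' average
            vx += 0.05 * (sum(o["vx"] for o in nbrs) / len(nbrs) - vx)
            vy += 0.05 * (sum(o["vy"] for o in nbrs) / len(nbrs) - vy)
        new.append({"x": d["x"] + DT * vx, "y": d["y"] + DT * vy,
                    "vx": vx, "vy": vy})
    return new

for _ in range(200):
    drones = step(drones)
```

After a few hundred steps the drones cluster and head in a common direction, even though each one only ever looks at its immediate neighbours.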

Apart from the benefits of drone swarms, such as reducing casualties (at least for the attacker) and covering search-and-rescue areas more effectively, the thought of weapon-carrying robots able to "think" for themselves is the stuff of nightmares.

2 Spying smart home devices

Image by realtor.com

To be as useful as possible, smart home devices need to listen to and record details about you and your daily habits. By placing an Echo in your room as a radio and alarm clock (or connecting any other smart device to the Internet), you have effectively allowed a spy into your home.

All smart devices collect information about your behavior, and it is stored in the cloud: your viewing history on Netflix; where you live and which route you take home, so that Google can tell you how to avoid traffic; and when you usually arrive home, so that your smart thermostat can set the temperature you want in your family room.

This knowledge makes your life more convenient, but there is also the potential for abuse. In principle, virtual assistants only listen for a "wake word" before they activate, but there are times when a device mistakenly thinks it has heard the wake word and begins recording. Every smart device in your house, including gaming consoles and smart TVs, may be an entry point for your personal information. There are some defensive techniques, like covering cameras, switching off devices when not needed, and muting microphones, but none of them are 100% foolproof.
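A toy sketch of the gating logic described above, under the assumption that assistants discard audio until a wake word is detected. The string-based matcher here is a hypothetical stand-in for a real on-device acoustic model, and it shows how a near-miss ("alexi") can trip the gate and cause unintended capture:

```python
# Toy model of wake-word gating. Real assistants run an on-device acoustic
# model over a rolling audio buffer; here the "audio" is just a word stream.

WAKE_WORD = "alexa"

def sounds_like(word, wake_word=WAKE_WORD, max_diff=1):
    """Crude stand-in for an acoustic match: allow one differing letter."""
    if len(word) != len(wake_word):
        return False
    return sum(a != b for a, b in zip(word, wake_word)) <= max_diff

def process_stream(words):
    recording, captured = False, []
    for w in words:
        if not recording:
            if sounds_like(w):      # gate: capture starts only on a trigger
                recording = True
        else:
            captured.append(w)      # everything after the trigger is kept
    return captured

# "alexi" is a near-miss: close enough to trip the toy matcher, so the
# private sentence that follows gets captured.
print(process_stream("i told alexi about my bank password yesterday".split()))
# → ['about', 'my', 'bank', 'password', 'yesterday']
```

The looser the matcher (a higher `max_diff` here, a lower detection threshold in a real acoustic model), the more false accepts, which is exactly the "thought you said the wake word" scenario.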

3 Face recognition

Image by vpnsrus.com

There are some incredibly useful facial recognition applications, but the technology can easily be used for dangerous purposes. China is a prime example of monitoring and racial profiling via facial recognition: Chinese cameras not only spot jaywalkers, they have also been used to monitor and control Uighur Muslims living in the country. Russia's cameras scan the streets for "people of interest," and there are rumors that Israel is monitoring Palestinians in the West Bank. Beyond tracking people without their knowledge, facial recognition is plagued with bias: if an algorithm is trained on data that is not large or diverse enough, it is less accurate and misidentifies people more often.
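The bias claim can be illustrated with a small simulation. This is synthetic data, not real face recognition: each "person" is a point in 2-D standing in for a face embedding, photos are noisy observations of it, and identification is a nearest-neighbour match against an enrolled gallery. Group B, with fewer enrolled reference photos, ends up misidentified more often:

```python
import random
random.seed(0)

NOISE = 0.5              # per-axis noise on each "photo"
PEOPLE_PER_GROUP = 20
REFS = {"A": 10, "B": 1}  # group A is well represented in the gallery, B is not

def run_trial():
    gallery, probes = [], []
    for group, n_refs in REFS.items():
        for i in range(PEOPLE_PER_GROUP):
            cx, cy = random.uniform(0, 10), random.uniform(0, 10)
            def shot(cx=cx, cy=cy):
                return (cx + random.gauss(0, NOISE),
                        cy + random.gauss(0, NOISE))
            pid = (group, i)
            gallery += [(shot(), pid) for _ in range(n_refs)]
            probes.append((shot(), pid))
    errs = {"A": 0, "B": 0}
    for (px, py), pid in probes:
        # identify the probe as the owner of the closest gallery photo
        _, match = min(gallery,
                       key=lambda g: (g[0][0] - px) ** 2 + (g[0][1] - py) ** 2)
        if match != pid:
            errs[pid[0]] += 1
    return errs

totals = {"A": 0, "B": 0}
for _ in range(20):
    errs = run_trial()
    totals["A"] += errs["A"]
    totals["B"] += errs["B"]

print(totals)  # group B, under-represented in the gallery, has more errors
```

The mechanism is the one the article names: with only one noisy reference photo per person, a stranger's photo is often closer than your own, so the under-represented group bears more of the misidentifications.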

4 AI cloning

Image by dailydot.com

With the help of Artificial Intelligence (AI), all you need to build a replica of someone's voice is a short audio snippet. Similarly, AI can take a number of photos or videos of a person and create a completely new, cloned image that looks original. Creating an artificial YOU is becoming quite easy for AI, and the results are so convincing that our brains have trouble distinguishing between what is real and what is cloned. Deep-fake technology, which uses facial scanning, machine learning, and artificial intelligence to construct images of real people doing or saying things they never did, is now reaching "normal" people.

Celebrities used to be the most likely victims of deep-fake technology because there was plenty of video and audio of them to train the systems on. However, the technology trend has advanced to the point that producing a convincing fake video no longer requires as much raw material, and there are plenty of photos and clips of ordinary people on the internet and social media networks to draw from.

5 Ransomware, AI and bot-enabled blackmailing and hacking

Image by blog.superb.net

When high-powered technology falls into the wrong hands, illegal, unethical, or destructive practices can be carried out very efficiently. According to the Cybersecurity and Infrastructure Security Agency (CISA), ransomware, which blocks access to a computer system until a ransom is paid, is on the rise. Artificial intelligence can streamline such operations and execute them more effectively. The negative impact could be severe when those operations involve activities like spear phishing: sending fake emails to trick people into giving up their private information.

Once the software is built, there is little to no cost in repeating the task over and over, so AI can blackmail people or hack into systems quickly and efficiently. While AI plays a major role in combating ransomware and other attacks, cyber criminals are also using it to commit crimes.

6 Smart dust

Microelectromechanical systems (MEMS), no larger than a grain of salt, pack sensors, control circuitry, autonomous power supplies, and cameras into a single device. Also called motes, this smart dust has plenty of positive uses in medicine, security, and more, but in the wrong hands it would be frighteningly difficult to control. Spying on a known enemy with smart dust might fall into the positive category, but it would be just as easy to invade the privacy of a private citizen.

7 Fake news bots

Image from commons.wikimedia.org

GROVER is an AI program that can compose a fake news article from nothing more than a headline, and systems like it can produce reports that read more credibly than human-written ones. OpenAI, a non-profit organization backed by Elon Musk, developed "deepfakes for text" that generate news stories and fiction so convincing that the group initially decided not to release the research publicly, to prevent dangerous misuse of the technology. If false stories are promoted or published as legitimate, the ramifications for people, companies, and governments can be serious.

Last modified: October 22, 2019