The researchers present a systematic study of how adversarial attacks on state-of-the-art object detection frameworks transfer to other frameworks. Using standard detection datasets, they train patterns that suppress the objectness scores produced by a range of commonly used detectors and their ensembles. Through extensive experiments, they evaluate how well adversarially trained patches work in both white-box and black-box settings, and how well attacks transfer between datasets, object classes, and detector models. Finally, they present a detailed study of physical-world attacks using printed posters and worn clothing, measuring the attacks' effectiveness with several metrics.
Scientists have built a real-life “invisibility cloak” that fools artificial intelligence (AI) cameras: a sweater printed with a pattern that “breaks” AI systems for recognizing people, making the wearer effectively “invisible” to AI cameras.
According to the research group, the stylish sweater is also a great way to stay warm this winter: “It has a modern cut, a waterproof microfleece lining, and anti-AI patterns that will help you hide from object detectors.”
The researchers say that in their demonstration they were able to trick the YOLOv2 detector using a pattern trained on the COCO dataset with a carefully constructed target.
A “see-through” coat that you can wear

According to Gagadget.com, the scientists, working with Facebook AI, started by looking for flaws in machine learning systems.
The result was a colorful print for clothing that defeats AI cameras, preventing a machine from detecting the person wearing it.
The scientists note that most research on real-world attacks on AI systems has focused on classifiers, which assign a single label to an entire image, rather than detectors, which localize objects within an image.
AI detectors consider thousands of “priors” (possible bounding boxes) at different locations, sizes, and aspect ratios across the image. To trick an object detector, an adversarial example must fool every prior in the image, which is much harder than fooling a classifier’s single output.
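The gap between attacking a classifier and attacking a detector can be made concrete with a toy loss. The sketch below is a simplification (the paper's full objective also includes terms such as smoothness regularization that are not shown): it penalizes the objectness score of every prior, so the loss only approaches zero when all priors are suppressed at once.

```python
import numpy as np

def objectness_suppression_loss(objectness_scores):
    """Sum of squared per-prior objectness scores.

    A classifier attack has to flip one output; a detector attack must
    drive *all* of these scores toward zero, so every prior that still
    fires keeps the loss high.
    """
    return float(np.sum(np.asarray(objectness_scores) ** 2))

# A real detector has thousands of priors; four here for illustration.
print(objectness_suppression_loss([0.9, 0.05, 0.7, 0.01]))    # ~1.30: one firing prior dominates
print(objectness_suppression_loss([0.02, 0.01, 0.03, 0.01]))  # near zero: all priors suppressed
```

Because the loss sums over priors, a pattern that silences most boxes but leaves one confident detection still scores poorly, matching the intuition that every prior must be fooled.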
The researchers trained their attack against the computer vision algorithm YOLOv2, using the COCO dataset, and optimized a pattern that suppresses the detector's person detections. They then turned this adversarial pattern into a sweater print. The sweater thus becomes a wearable “cloak of invisibility”: it makes the person wearing it invisible to detectors, letting them hide from systems that look for people.
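The optimization behind the print can be sketched as plain gradient descent on the patch values. In the sketch below, a toy linear-plus-sigmoid “objectness head” stands in for YOLOv2 (an assumption for illustration only; the real attack backpropagates through the full detector and renders the patch onto training images):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a detector's objectness head: a fixed linear map
# followed by a sigmoid. The real attack differentiates through YOLOv2.
W = rng.normal(size=16)

def objectness(patch):
    return 1.0 / (1.0 + np.exp(-patch @ W))

def grad(patch):
    s = objectness(patch)
    return s * (1.0 - s) * W  # chain rule through the sigmoid

patch = rng.normal(size=16)   # the "sweater print" being optimized
score_before = objectness(patch)
for _ in range(200):
    patch -= 1.0 * grad(patch)  # descend on the objectness score
score_after = objectness(patch)
```

After enough steps, `score_after` sits near zero: under this surrogate head, the optimized patch no longer registers as an object, which is the property the printed pattern exploits in the physical attack.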
Check out the Paper, Website, and Reference Article. All credit for this research goes to the researchers on this project.
Ashish Kumar is a consulting intern at MarktechPost. He is currently pursuing his B.Tech from the Indian Institute of Technology (IIT), Kanpur. He is passionate about exploring new advancements in technology and their real-life applications.