Threat Research

Troopers - Day 1

By Axelle Apvrille | March 14, 2018

I am currently at Troopers, a well-known German hacking conference in Heidelberg. I had heard many positive reports about this conference, especially about its awesome hardware badge, and am glad I finally got to speak there. My talk was on hacking a smart toothbrush, and why it's important to secure any connected device, even those - like toothbrushes - that seem harmless. If you missed my talk, my slides will soon be online; check the FortiGuard Research Centre.

Now, let's focus on some of today's talks.

Next Generation Internet Keynote, Graeme Neilson

After years in infosec, Graeme has sadly come to the pessimistic conclusion that we have so far failed to make things better with regard to IT security:

  • There were poor passwords, buffer overflows, and bad input checking 20 years ago, and it's still the same story today: WannaCry relies on buffer overflows, and the Mirai IoT botnet scans for weak passwords.
  • More and more CVEs are filed every year, and despite vulnerabilities being fixed, the number of breaches has also gone up.

The goal of a keynote is to open minds, and this one certainly did. I would nevertheless temper his conclusions and statistics because, while it is true that there are more vulnerabilities and breaches than ever, we also have more products, more vulnerability researchers, and a more systematic process for reporting security issues. To some extent, bug bounties also raise interest in and awareness of security vulnerabilities. So, while we might have more vulnerabilities, we are perhaps in a better security state than we were previously, when we were simply unaware of many bugs even though they existed.

Concerning disclosure, Graeme has seen the trend shift from full disclosure to responsible disclosure. He doesn't like either of them, and asked the audience: to whom is responsible disclosure actually responsible? He finds that responsible disclosure favors the vendors while jeopardizing the security of customers and citizens. Instead, he proposes a new scheme, which he calls artistic disclosure, where the vulnerability is expressed in an artistic way and then sent out publicly. The art that illustrates the vulnerability is meant to make the public react emotionally to what they see and experience something. His point is that, so far, the masses do not feel concerned by vulnerabilities and don't have a clue what WannaCry, Meltdown, or Spectre are. He doesn't need them to understand the technical aspects of a vulnerability, but rather to feel concerned and touched by what is happening. He tested this sort of disclosure with vulnerabilities in Snapchat, for instance, with the exceptional result of having the bug fixed overnight.

The artistic disclosure is sent publicly to users, customers, lists, etc. Graeme, however, admits there are risks, such as being sued, and therefore advises anyone doing this to post anonymously.

Anonymous disclosure is not an acceptable solution for security companies and researchers who find vulnerabilities: if researchers are not given any credit for their research, what do they get in return? On the other hand, I personally like the idea of illustrating a vulnerability with some art: it probably makes the issue more understandable and more visible to the masses. In fact, many researchers have already started doing so, more or less, as you may have noticed that most major vulnerabilities now come with their own logos (Heartbleed, Spectre, etc.).

Robotnikoff at Troopers: Robots, security and privacy, Brittany Postnikoff

Brittany discussed the social acceptance of robots in our lives, and their impact on our security and privacy. She finds that people often like robots and will accept strange behaviours from them that they would not accept from human beings. As a matter of fact, there are efforts underway to make robot "faces" more friendly to us humans, so that we interact with and trust them more easily.

For instance, she once used a wifi vacuum cleaner robot to enter a number of restricted areas. She simply put a box of candies on the robot, directed the robot to the restricted room, and had the robot knock on the door. Someone would open the door, see the robot, notice the candies, and let it in without a second thought. What they perhaps did not realize was that the robot also had an embedded camera and microphone and was able to spy very efficiently.

She also pointed out that we like our robots so much that we often name them. Once, she witnessed a "robot suicide," where a robot deliberately headed for a staircase. What if the same thing happened to a robot used as a companion for people who need mental support? Robots are often used by people with disabilities, by children with autism, etc. If a robot's behaviour suddenly becomes irrational, suicidal, or otherwise erratic, could this pose serious problems for its owners?

Brittany also noted that every robot she has ever worked on had many vulnerabilities, that some of these robots were unpatchable (if you patched them and then reset them, they reverted to their default vulnerable system), and that some were running software as much as 10 years old.

This was an interesting and entertaining talk about our relationship with robots. If you're interested in the topic, have a look at her talk at ShmooCon. On a similar topic, check out Dennis Giese and DanielAW's work on the Xiaomi vacuum robot.

-- the Crypto Girl, in Heidelberg
