
Hackers Can Shine Lasers at Your Alexa Device and Do Bad, Bad Things to It


Move the Echo away from the window. Now.
  • A new paper funded by DARPA and the Japan Society for the Promotion of Science finds that simple lasers can essentially hack into voice-controlled assistants.
  • Researchers are able to use lasers to inject malicious commands into smart devices, even remotely starting a victim's car if it's connected through a Google account.
  • To be safe, keep your voice devices away from windows in your home.

    Keep Alexa away from all windows: Turns out hackers can shine lasers at your Google Assistant or Amazon Alexa-enabled devices and gain control of them, sending commands to the smart assistants or obtaining your valuable account information.
    Researchers proved this by using lasers to inject malicious commands into voice-controlled devices like smart speakers, tablets, and phones across long distances through glass windowpanes.
    In a new paper, "Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems," the authors describe the laser-based vulnerability as a "signal injection attack" on microphones, one that exploits the photoacoustic effect, by which light is converted into sound inside the microphone. From the abstract:
    We show how an attacker can inject arbitrary audio signals to the target microphone by aiming an amplitude-modulated light at the microphone’s aperture. We then proceed to show how this effect leads to a remote voice-command injection attack on voice-controllable systems. Examining various products that use Amazon’s Alexa, Apple’s Siri, Facebook’s Portal, and Google Assistant, we show how to use light to obtain full control over these devices at distances up to 110 meters and from two separate buildings.
    User authentication is lacking or nonexistent on voice assistant devices, the researchers also found. That meant they could use the light-injected voice commands to unlock the victim's smart lock-enabled doors, shop on their e-commerce sites, use their payment methods, or even unlock and start vehicles connected to the victim's Google account.
    The research—funded by the Japan Society for the Promotion of Science and the U.S. Defense Advanced Research Projects Agency (DARPA), among other organizations—was meant to uncover possible security vulnerabilities in Internet of Things devices.
    "While much attention is being given to improving the capabilities of [voice controlled] systems, much less is known about the resilience of these systems to software and hardware attacks," the authors write in the paper. "Indeed, previous works already highlight a major limitation of voice-only user interaction: the lack of proper user authentication."
    [Image credit: Light Commands]
    So how does it work? The mics convert sound into electrical signals, sure, but they also respond to light aimed directly at them. By amplitude-modulating a high-intensity light beam with an electrical audio signal, the researchers could trick the microphones embedded in voice devices into producing the same electrical signals they would produce when receiving real audio, such as your voice. Hackers can then inject inaudible commands through the light beam.
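    To make that more concrete, here's a minimal Python sketch of the amplitude-modulation idea: bias a laser diode at a fixed operating current, then add a scaled copy of the command audio on top so the beam's intensity tracks the waveform. The bias, swing, and sample rate below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Sketch of the amplitude-modulation idea: the diode sits at a constant bias
# current, and the audio waveform rides on top of it, so the emitted light
# intensity tracks the audio. The MEMS microphone then "hears" that intensity
# variation as sound. All current values below are hypothetical.

SAMPLE_RATE = 16_000   # Hz, a common rate for voice-command audio
I_BIAS = 0.200         # A, hypothetical diode operating point
I_SWING = 0.150        # A, hypothetical maximum audio-driven swing

def laser_drive_current(audio: np.ndarray) -> np.ndarray:
    """Map a normalized audio waveform (-1..1) to a diode drive current."""
    audio = np.clip(audio, -1.0, 1.0)
    return I_BIAS + I_SWING * audio   # amplitude modulation around the bias

# Example: a 440 Hz test tone standing in for a recorded voice command.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE        # one second of samples
tone = 0.8 * np.sin(2 * np.pi * 440 * t)
current = laser_drive_current(tone)
print(f"drive current spans {current.min():.3f} A to {current.max():.3f} A")
```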
    Devices are vulnerable from up to 110 meters away, as of the time of the paper's publication. That's about the length of a football field.
    Pretty much any voice-enabled device you can imagine is vulnerable to this kind of attack, but the authors have tested and confirmed vulnerabilities in the following:
    [Image: table of tested devices, via Light Commands]
    Perhaps the most disturbing part is how easy this kind of hack is. All you need, according to the research group, are a simple laser pointer, a laser diode driver (which supplies a constant current to the laser), a sound amplifier, and a telephoto lens to focus the laser for long-range attacks.
    While this research and the associated hacks were conducted by professional researchers, criminals could do the same (though the researchers found no instances of this happening maliciously ... yet). Now that the methodology is public, it's a double-edged sword: yes, some consumers may read about this vulnerability and guard against it, but hackers have new ideas, too.
    It's pretty difficult to tell if you're being attacked this way, but you may notice the light beam's reflection on the device, or you can watch for unexpected verbal responses and light-pattern changes. If you notice your device acting erratically, unplug the damn thing and move it away from the window.
    The good news is that there are ways to protect yourself from a laser-based attack, according to the paper:
    An additional layer of authentication can be effective at somewhat mitigating the attack. Alternatively, in case the attacker cannot eavesdrop on the device's response, having the device ask the user a simple randomized question before command execution can be an effective way of preventing the attacker from obtaining successful command execution.
    Manufacturers can also attempt to use sensor fusion techniques, such as acquiring audio from multiple microphones. When the attacker uses a single laser, only a single microphone receives a signal while the others receive nothing. Thus, manufacturers can attempt to detect such anomalies, ignoring the injected commands.
    Another approach consists in reducing the amount of light reaching the microphone's diaphragm using a barrier that physically blocks straight light beams for eliminating the line of sight to the diaphragm, or implement a non-transparent cover on top of the microphone hole for attenuating the amount of light hitting the microphone. However, we note that such physical barriers are only effective to a certain point, as an attacker can always increase the laser power in an attempt to compensate for the cover-induced attenuation or for burning through the barriers, creating a new light path.
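    As a rough illustration of the sensor-fusion defense quoted above, the sketch below compares RMS energy across microphone channels and flags a command that shows up on only one mic while the others sit near the noise floor. The thresholds and four-mic layout are assumptions for demonstration, not part of the paper:

```python
import numpy as np

# Sketch of a sensor-fusion check: real speech reaches every microphone in
# an array, while a laser illuminates only one. If one channel carries a
# strong signal and the rest are near silence, treat it as injected.
# NOISE_FLOOR and ACTIVE_RATIO are made-up illustrative thresholds.

NOISE_FLOOR = 0.01    # hypothetical RMS of an idle microphone channel
ACTIVE_RATIO = 10.0   # loudest mic must dwarf the others by this factor

def looks_like_light_injection(channels: np.ndarray) -> bool:
    """channels: (n_mics, n_samples) array of simultaneous recordings."""
    rms = np.sqrt(np.mean(channels ** 2, axis=1))
    loudest = rms.max()
    others = np.delete(rms, rms.argmax())
    return loudest > NOISE_FLOOR and bool(np.all(others * ACTIVE_RATIO < loudest))

# Simulated check: a "laser" command on mic 0 only vs. speech on all four mics.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16_000)
laser = rng.normal(0, 0.001, (4, t.size))
laser[0] += 0.5 * np.sin(2 * np.pi * 300 * t)
speech = np.tile(0.3 * np.sin(2 * np.pi * 300 * t), (4, 1))
print(looks_like_light_injection(laser))   # True: only one mic hears it
print(looks_like_light_injection(speech))  # False: all mics hear it
```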
