Tracking Iguanas with Drones Equipped with Software Defined Radios

Our scientific collaborators at the San Diego Zoo Wildlife Alliance have a long-running research program studying the behaviors of endangered iguanas in the Caribbean. As part of their efforts to understand these animals, they attach tiny radios to the iguanas and attempt to track them over weeks to months. In the past, this has largely relied on humans equipped with directional antennas traversing rough terrain to find these radios and the iguanas attached to them.

Our Engineers for Exploration researchers felt we could do better. Over the years, we have developed a drone equipped with a software defined radio to fly over an area and find the animals. The software defined radio “listens” for the radios attached to the iguanas and captures characteristics of each radio’s signal. We have developed automated algorithms that analyze the data received by the drone’s radio to estimate the location of the iguanas. The algorithm fuses together position estimates from different times and locations. Our field deployments over the past several years have shown that our drone-based system can effectively find radio-tagged animals.
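The fusion step can be illustrated with a toy example. The estimator in the paper is more sophisticated, but as a rough sketch of the idea, inverse-variance weighting combines noisy position estimates taken from different times and locations, trusting the more certain measurements more. The function name and all the numbers below are made up for illustration:

```python
import numpy as np

def fuse_estimates(positions, variances):
    """Fuse noisy 2D position estimates by inverse-variance weighting.

    positions: (N, 2) array of (x, y) estimates from different drone
    passes; variances: (N,) variance of each estimate. More certain
    (lower-variance) measurements get proportionally more weight.
    """
    positions = np.asarray(positions, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
    fused_var = 1.0 / weights.sum()  # fused estimate is more certain than any single one
    return fused, fused_var

# Three passes over the same radio tag; the last (closest) pass is the
# most certain, so it pulls the fused estimate toward itself.
est, var = fuse_estimates([(10.0, 5.0), (11.0, 5.5), (10.4, 5.2)],
                          [4.0, 4.0, 1.0])
```

Each new pass tightens the fused variance, which is why flying multiple passes over an area improves the final location estimate.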

This research was recently published in the Journal of Field Robotics. For more details, please see our paper below. Congrats to all the authors!

Nathan T. Hu, Eric K. Lo, Jen B. Moss, Glenn P. Gerber, Mark E. Welch, Ryan Kastner, and Curt Schurgers, “A More Precise Way to Localize Animals Using Drones”, Journal of Field Robotics, 2021 (pdf)

S2N2: A FPGA Accelerator for Streaming Spiking Neural Networks

Spiking Neural Networks (SNNs) utilize an event-based representation to perform more efficient computation than existing artificial neural networks. SNNs show a lot of promise for low-energy computation, but are still limited by the lack of quality training tools and efficient hardware implementations.

Our recent work published at the ACM/IEEE International Symposium on Field-Programmable Gate Arrays (ISFPGA) extends the Xilinx FINN architecture to support streaming spiking neural networks (S2N2). S2N2 efficiently supports both axonal and synaptic delays for feedforward networks with interlayer connections. We show that because of the spikes’ binary nature, a binary tensor can be used for addressing the input events of a layer. We show that S2N2 works well for automatic modulation classification — an important problem for modern wireless networks.
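To see why binary spikes make addressing cheap, here is a small numpy sketch (this is an illustration of the idea, not the actual S2N2 FPGA datapath): because each input event is 0 or 1, a layer’s weighted sum needs no multiplications at all — the binary event tensor simply addresses which weight columns to accumulate.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 8))   # 8 inputs feeding 4 neurons
# Binary event tensor: which of the 8 inputs spiked this timestep
spikes = np.array([1, 0, 0, 1, 0, 1, 0, 0], dtype=bool)

# Dense formulation: a full matrix-vector multiply ...
dense = weights @ spikes.astype(float)

# ... is equivalent to event-driven accumulation, where the binary
# tensor just addresses the weight columns of the active inputs.
event_driven = weights[:, spikes].sum(axis=1)

assert np.allclose(dense, event_driven)
```

On an FPGA, that equivalence is what lets the accelerator replace multiply-accumulate units with simple addressed accumulation.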

The work was done in collaboration with Xilinx. For more details, check out Ali’s talk at ISFPGA.

Paper Reference: Alireza Khodamoradi, Kristof Denolf, and Ryan Kastner, “S2N2: A FPGA Accelerator for Streaming Spiking Neural Networks”, International Symposium on Field-Programmable Gate Arrays (ISFPGA) (pdf)

Two New(ish) Group Members

An extremely belated but enthusiastic welcome to Olivia Weng and Jennifer Switzer — two PhD students who joined our group in Fall 2020.

Olivia Weng joins us from the University of Chicago where she got her BS in Computer Science. As an undergraduate, her research with Prof. Andrew Chien (formerly a UCSD professor) studied the use of machine learning techniques to optimize operating system requests.

Jennifer Switzer got an MEng and BS from MIT. Her Master’s thesis looked at vulnerabilities that arise when “safe” processes written in Rust interact in potentially unsafe manners through inter-process communication.

Welcome Liv and Jen!

Distinguished Lecture & Tutorial on Property Driven Hardware Security

December 2020 involved a couple of major events related to our hardware security research — a HOST Tutorial and a CASA Distinguished Lecture. Ryan and Dr. Nicole Fern from Tortuga Logic gave a tutorial at the IEEE International Symposium on Hardware Oriented Security and Trust (HOST) 2020. Ryan was also invited to give a Distinguished Lecture in the CASA Cluster of Excellence at Ruhr University Bochum. Both events focused on our work on Property Driven Hardware Security.

Property driven hardware security is a design methodology to assess the safety and security of hardware designs. It enables security experts to describe how the hardware should (or should not) function. These security properties are formally specified using languages that map to models that are easy to verify using existing design tools. There are three fundamental elements for any hardware security design flow. First, security experts need expressive languages to specify these security properties. Second, these properties should map to models to describe the security related behavior of a hardware design. Finally, hardware security design tools verify that the hardware design meets these properties using formal solvers, simulation, and emulation.

The HOST tutorial was one of six selected to provide HOST attendees with an in-depth look at important topics in hardware security. I gave a similar tutorial at the previous HOST that was well-received, and was invited back for another year. This time around, the tutorial included Dr. Nicole Fern from Tortuga Logic. Nicole provided a great presentation on the types of properties that modern hardware security verification tools can handle. I added an in-depth look at how these tools can verify security properties. Have a look at the materials made available to the attendees if you would like.

The Distinguished Lecture was a great honor for me. I really admire the research done in CASA Cluster of Excellence — they have an outstanding group of researchers that I have followed for many years (even decades). This invitation did lead me to consider what one needs to do in order to be eligible to give a distinguished lecture. My conclusion is that one mostly just needs to be a researcher for a long enough time and then their work becomes distinguished. And that made me feel a bit old. So before my talk I made sure to shave and pluck out grey hairs. The folks at CASA did a nice job of producing a video of the talk:

X-Ray Vision: Enhancing Liver Surgery with Augmented Reality

Liver cancer has the fastest-growing incidence and the second highest mortality of all cancers in the United States. Worldwide, it is estimated that over one million people will die from liver cancer in 2030. Liver resection (hepatectomy) is the paradigm for treating liver cancer. A crucial part of a partial hepatectomy is understanding where the tumors, vessels, and other important landmarks are located. To aid in this, the patient typically undergoes preoperative cross-sectional imaging (e.g., CT/MR scans). Surgeons use these images to determine resectability based upon the location of important structures (e.g., veins), analyze tumor margins, accurately compute future liver remnant volumes, and generally aid in surgical planning and navigation.

An augmented reality image guidance system for enhancing liver surgery.

However, it is challenging for the surgeon to mentally register preoperative cross-sectional images to the surface of the liver at the time of operation since surgical actions cause significant and sometimes permanent liver deformations that lead to mismatches with cross-sectional images. Mentally integrating preoperative data into the operative field is time consuming and error prone. This can make it difficult to accurately localize smaller tumors intra-operatively, which can affect surgical decision making and adequate resection of primary and metastatic liver tumors.

Dr. Michael Barrow’s PhD thesis developed augmented reality (AR) image guidance techniques that merge preoperative data directly into the surgeon’s view during surgery. The goal is to provide surgeons with what Michael describes as “X-ray vision” — allowing them to see through tissues and better understand where blood vessels, tumors, and other important surgical landmarks lie.

Current scenario: the surgeon has to estimate internal vessel positions.
X-ray vision: the surgeon is presented with an AR overlay of internal landmarks.

The research brings together many state-of-the-art technologies. It requires computer vision approaches to track the surgical scene, real-time mechanical modeling of the organ to accurately place the important unseen surgical landmarks, augmented reality to visualize the landmarks, and hardware-accelerated compute systems to process the high throughput sensor data. He showed that patient-specific biomechanical modeling results in clinically significant increases in accuracy. Specifically, he built a system that uses magnetic resonance elastography to create a patient-specific mechanical model. The system works in real-time to provide accurate positions of unseen landmarks. He physically validated the techniques by creating a phantom mechanical platform to demonstrate it is possible to track landmarks internal to the phantom liver.

Left: Overview of the complete AR surgical system. Right: Experimental platform used to validate the AR accuracy

Michael took an unconventional path to his PhD. Unlike most PhDs, he laid out his research topic almost solely on his own. He spent a lot of time shadowing medical doctors to understand their problems. He deftly maneuvered through many different fields, seeking out and finding key collaborators. The result is an amazing example of an interdisciplinary thesis that has tremendous potential value in a clinical setting.

Michael developed a number of other technologies that are not reflected in his thesis. Most recently, he has focused on developing technologies to help with the COVID-19 crisis, work that was awarded a UCSD Institute of Engineering in Medicine Galvanizing Engineering in Medicine award. He led a team of undergraduates to build systems to better scale the care of COVID-19 patients (for more information see CSE Research Highlight).

Michael was a real tour de force in pushing collaborations between the School of Engineering and the School of Medicine. In addition to his PhD thesis project, he developed a close collaboration with Dr. Shanglei Liu and made many other connections between our research group and the medical school that will certainly lead to more fruitful collaborations in the future.

After graduation, Michael started a post-doctoral position at Lawrence Livermore National Labs.

Ancient China from Above

Our large-scale 3D modeling work was featured on the National Geographic Docuseries “Ancient China from Above“. We developed 3D models using drones, multispectral cameras, lidar, and other cutting-edge technology, which provided archaeologist Alan Maca with new insights to better understand ancient Chinese civilizations.

As part of the production, Ryan and Eric Lo traveled to some of the most remote parts of China including Xanadu (Kublai Khan’s summer palace in Inner Mongolia), the Han Great Wall in the Gobi Desert, and Shimao — a new archaeological site known as China’s Pompeii located on the Loess plateau.

The three part series aired on the National Geographic Channel and is available on most video streaming platforms. The full “Secrets of the Great Wall” episode is available on YouTube. We appear around the 26 minute mark.

Sketching Secure Hardware

Hardware security-related attacks are growing in number and severity. Spectre, Meltdown, Foreshadow, Fallout, ZombieLoad, and Starbleed are just a few of the many recent attacks that exploit hardware vulnerabilities. While vulnerabilities are seemingly easy to find, designing secure hardware is challenging (to say the least) and there are limited tools to aid this process.

Armita Ardeshiricham’s PhD thesis made pioneering and fundamental contributions in detecting, localizing, and repairing hardware vulnerabilities. Her thesis developed verification tools that quickly find vulnerabilities that previous work could not. And it laid the foundation for automated debugging of those flaws.

Her early work focused on developing powerful information flow tracking (IFT) tools that work at the register transfer level. She extended this work in a fundamentally important manner by formulating IFT logic that detects timing-based flows. And she pioneered the idea of sketching for hardware security. The culmination of her PhD research is the VeriSketch framework.
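For a flavor of what precise IFT logic looks like, here is a minimal Python sketch of the classic precise taint-propagation rule for a 2-input AND gate; Armita’s RTL and timing-flow tracking generalize well beyond this toy, and the function name here is purely illustrative:

```python
def and_ift(a, a_taint, b, b_taint):
    """Precise taint propagation for out = a & b (1-bit values).

    A tainted input only taints the output when it can actually affect
    it: if the other input is an untainted 0, the output is stuck at 0
    regardless of the tainted input, so no information flows.
    """
    out = a & b
    out_taint = (a_taint & b_taint) | (a_taint & b) | (b_taint & a)
    return out, out_taint

# Tainted a with untainted b=1: a's value is visible at the output -> flow.
# Tainted a with untainted b=0: output is stuck at 0 -> no flow.
```

The payoff of the precise rule is fewer false positives than conservatively tainting the output whenever any input is tainted, which is what makes verifying real designs tractable.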

VeriSketch is the first design framework that uses sketching to automatically synthesize secure and functionally complete hardware designs. VeriSketch frees hardware designers from specifying exact cycle-by-cycle behaviors and excruciating bit-level details that often lead to security vulnerabilities. Instead, the designer provides a sketch of the circuit alongside a set of functional and security properties. VeriSketch uses program synthesis techniques to automatically generate a fully-specified design which satisfies these properties. VeriSketch leverages hardware IFT to enable definition and verification of security specifications, which allows for the analysis of a wide variety of security properties related to confidentiality, integrity, and availability.

Armita’s PhD research will undoubtedly have a lasting impact on our group’s hardware security efforts and has laid out a research agenda for the next few years (and likely beyond). Based on her work, we have started projects on error localization (with Prof. Yanjing Li at Univ. of Chicago) and automated property generation (with Prof. Cynthia Sturton at Univ. of North Carolina) that were recently funded by the Semiconductor Research Corporation. Her work was fundamental in developing system on chip access control monitors in collaboration with Leidos and Sant’Anna School of Advanced Studies in Pisa. She will certainly be missed!

Dr. Ardeshiricham currently works at Apple doing things that she can tell no one about (as is typical with Apple). But I’m certain that future Apple devices will be much more secure with her overseeing the verification process.

A very long overdue post and congrats again!


Ryan’s acknowledgment — acting as Mel Gibson to Armita’s Jim Caviezel during her PhD career.

Science and Technology Behind Mangrove Conservation

Did you know that mangroves sequester more carbon than rainforests? In addition to being one of the best carbon scrubbers in the world, they also protect coastlines from erosion and hurricanes and provide an amazing nursery for aquatic life. Yet, these important ecosystems are in decline worldwide, hurt by industrialization, rising sea levels, and other climatic events.

As part of the activities around World Mangrove Day, Ryan moderated an online panel “The Science Behind Remote Sensing” related to using technology to monitor and rehabilitate mangroves. The panel featured researchers from NASA, Microsoft, UCSD, and the Nature Conservancy who are using drones, satellites, multispectral imaging, machine learning, and a bunch of other technologies to understand and rehabilitate mangroves around the world. Our collaborator Astrid Hsu presented some of the technologies that we are working on as part of the Engineers for Exploration program. And there was a lot of interesting discussion on how to use technology to monitor, understand, and rehabilitate these important ecosystems.

Low-cost 3D Scanning Systems for Cultural Heritage Documentation

Digitally documenting archaeological sites provides high-resolution 3D models that are more accurate than traditional analog (manual) recordings. Capturing the 3D data either comes at great financial cost (if using a lidar-based system) or is time-consuming during data collection and post-processing (when using photogrammetry). This has limited the use of these techniques in the field.

Depth sensors like the Microsoft Kinect and Intel RealSense provide a relatively low-cost way of capturing depth data. Open-source 3D mapping software provides fast and accurate algorithms to turn this depth data into 3D models. Our research combines depth sensors and 3D mapping algorithms to develop a low-cost 3D scanning system. We analyzed multiple sensors and software packages to develop a prototype system that creates large-scale 3D models of tunneling-based archaeological sites. We used this system to document the Maya archaeological site El Zotz in the Petén region of Guatemala. Our findings were recently published in the paper “Low-cost 3D scanning systems for cultural heritage documentation” in the Journal of Cultural Heritage Management and Sustainable Development.

This research is the result of a multi-year (and ongoing) effort between Engineers for Exploration and archaeologists at El Zotz. Congrats to all those involved in this impressive project.

Real-time Automatic Modulation Classification

Advanced wireless communication techniques, like those found in 5G and beyond, require low latency while operating on high throughput streams of radio frequency (RF) data. Automatic Modulation Classification is one important method to understand how other radios are using the wireless channel. This information can be used in applications such as cognitive radios to better utilize the wireless channel and transmit information at faster rates.

Our recent work shows how to perform modulation classification in real-time by exploiting the RF capabilities offered by Xilinx RFSoC platforms. This work, led by the University of Sydney Computer Engineering Lab, developed a non-uniform and layer-wise quantization technique to shrink the large memory footprint of neural networks to fit on the FPGA fabric. This technique preserves the classification accuracy in a real-time implementation.
This work was published at the Reconfigurable Architectures Workshop (RAW), and an open-source implementation on the Xilinx RFSoC ZCU111 development board is available in the GitHub repo.
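The published layer-wise scheme is more involved, but the core idea of a non-uniform grid can be sketched in a few lines: snapping weights to signed power-of-two levels packs the quantization levels near zero, where most trained weights live, and turns each FPGA multiply into a shift. The function below is an illustrative toy, not the method from the paper:

```python
import numpy as np

def quantize_pow2(w, n_levels=4):
    """Snap weights to a non-uniform grid of signed power-of-two levels.

    Unlike a uniform grid, power-of-two magnitudes cluster the levels
    near zero, matching the bell-shaped weight distributions of trained
    networks, and each multiply becomes a shift in hardware.
    """
    w = np.asarray(w, dtype=float)
    sign = np.sign(w)
    mag = np.abs(w)
    max_exp = np.floor(np.log2(mag.max()))
    levels = 2.0 ** (max_exp - np.arange(n_levels))  # e.g. {0.5, 0.25, 0.125, ...}
    idx = np.abs(mag[..., None] - levels).argmin(axis=-1)
    q = sign * levels[idx]
    q[mag < levels.min() / 2] = 0.0   # magnitudes below the grid underflow to zero
    return q

q = quantize_pow2([0.9, -0.3, 0.05])
```

Applying this per layer (each layer picks its own `max_exp`) is what the layer-wise part of the technique refers to: layers with different weight ranges get grids matched to their own distributions.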