Ancient China from Above

Our large-scale 3D modeling work was featured in the National Geographic docuseries “Ancient China from Above”. We developed 3D models using drones, multispectral cameras, lidar, and other cutting-edge technology, which provided archaeologist Allan Maca with new insights to better understand ancient Chinese civilizations.

As part of the production, Ryan and Eric Lo traveled to some of the most remote parts of China including Xanadu (Kublai Khan’s summer palace in Inner Mongolia), the Han Great Wall in the Gobi Desert, and Shimao — a new archaeological site known as China’s Pompeii, located on the Loess Plateau.

The three-part series aired on the National Geographic Channel and is available on most video streaming platforms. The full “Secrets of the Great Wall” episode is available on YouTube. We appear around the 26-minute mark.

Science and Technology Behind Mangrove Conservation

Did you know that mangroves sequester more carbon than rainforests? In addition to being one of the best carbon scrubbers in the world, they also protect coastlines from erosion and hurricanes and provide an amazing nursery for aquatic life. Yet, these important ecosystems are in decline worldwide, hurt by industrialization, rising sea levels, and other climatic events.

As part of the activities around World Mangrove Day, Ryan moderated an online panel, “The Science Behind Remote Sensing,” about using technology to monitor and rehabilitate mangroves. The panel featured researchers from NASA, Microsoft, UCSD, and the Nature Conservancy who are using drones, satellites, multispectral imaging, machine learning, and a bunch of other technologies to understand and rehabilitate mangroves around the world. Our collaborator Astrid Hsu presented some of the technologies that we are working on as part of the Engineers for Exploration program. There was a lot of interesting discussion on how to use technology to monitor, understand, and rehabilitate these important ecosystems.

Low-cost 3D Scanning Systems for Cultural Heritage Documentation

Digitally documenting archaeological sites provides high-resolution 3D models that are more accurate than traditional analog (manual) recordings. Capturing the 3D data can come at great financial cost (when using a lidar-based system) or be time-consuming during data collection and post-processing (when using photogrammetry). This has limited the use of these techniques in the field.

Depth sensors like the Microsoft Kinect and Intel RealSense provide a relatively low-cost way of capturing depth data. Open-source 3D mapping software provides fast and accurate algorithms to turn this depth data into 3D models. Our research combines depth sensors and 3D mapping algorithms to develop a low-cost 3D scanning system. We analyzed multiple sensors and software packages to develop a prototype system that creates large-scale 3D models of tunneling-based archaeological sites. We used this system to document the Maya archaeological site El Zotz in the Petén region of Guatemala. Our findings were recently published in the paper “Low-cost 3D scanning systems for cultural heritage documentation” in the Journal of Cultural Heritage Management and Sustainable Development.
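To give a flavor of how such a pipeline starts, here is a minimal sketch (not our actual system) of the first step: back-projecting a depth frame into a 3D point cloud using the standard pinhole camera model. The intrinsic parameters (fx, fy, cx, cy) below are hypothetical placeholder values, not those of any particular sensor.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an (N, 3) point cloud
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Stack into an (N, 3) array and drop invalid (zero-depth) pixels
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Toy 2x2 depth image: 1 m everywhere except one invalid pixel
depth = np.array([[1.0, 1.0], [1.0, 0.0]])
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=1.0, cy=1.0)
print(cloud.shape)  # (3, 3): three valid pixels, xyz each
```

In a full system, clouds from successive frames are registered and fused by the 3D mapping software; this sketch covers only the per-frame geometry.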

This research is the result of a multi-year (and ongoing) effort between Engineers for Exploration and the archaeologists at El Zotz. Congrats to all those involved in this impressive project.

Real-time Automatic Modulation Classification

Advanced wireless communication techniques, like those found in 5G and beyond, require low latency while operating on high throughput streams of radio frequency (RF) data. Automatic Modulation Classification is one important method to understand how other radios are using the wireless channel. This information can be used in applications such as cognitive radios to better utilize the wireless channel and transmit information at faster rates.

Our recent work shows how to perform modulation classification in real-time by exploiting the RF capabilities offered by Xilinx RFSoC platforms. This work, led by the University of Sydney Computer Engineering Lab, developed a non-uniform and layer-wise quantization technique to shrink the large memory footprint of neural networks to fit on the FPGA fabric. This technique preserves the classification accuracy in a real-time implementation.
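To illustrate the general idea of non-uniform, per-layer quantization (this is an illustrative sketch, not the paper’s exact method), one can fit a small codebook to each layer’s weight distribution with a 1-D k-means (Lloyd) iteration, so the quantization levels adapt to where the weights actually cluster rather than being uniformly spaced:

```python
import numpy as np

def quantize_layer(weights, n_levels=4, iters=20):
    """Non-uniform quantization of one layer's weights via a tiny
    1-D k-means: the codebook adapts to the weight distribution."""
    w = weights.ravel()
    # Initialize the codebook at evenly spaced quantiles of the weights
    centers = np.quantile(w, np.linspace(0, 1, n_levels))
    for _ in range(iters):
        # Assign each weight to its nearest codebook entry ...
        idx = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
        # ... then move each entry to the mean of its assigned weights
        for k in range(n_levels):
            if np.any(idx == k):
                centers[k] = w[idx == k].mean()
    idx = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
    return centers[idx].reshape(weights.shape), centers

rng = np.random.default_rng(0)
layer = rng.normal(size=(8, 8))
quantized, codebook = quantize_layer(layer, n_levels=4)
print(len(np.unique(quantized)) <= 4)  # True: only codebook values remain
```

With 4 levels, each weight needs only 2 bits plus a tiny shared codebook, which is what makes it feasible to hold the whole network in on-chip FPGA memory.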
This work was published at the Reconfigurable Architectures Workshop (RAW), and an open-source implementation on the Xilinx RFSoC ZCU111 development board is available in the GitHub repo.

Jeremy Blackstone Named Bouchet Scholar

The Edward Alexander Bouchet Graduate Honor Society is a network of preeminent scholars who exemplify academic and personal excellence, foster environments of support, and serve as examples of scholarship, leadership, character, service, and advocacy for students who have been traditionally underrepresented in the academy. 

Jeremy will fit in perfectly. His research on hardware security is exploring new ways to mitigate side channel attacks. It has resulted in several research papers in top venues. His leadership, service, and advocacy were evident during his time as an undergraduate at Howard University and throughout his PhD career at UCSD. A small sampling of this includes tutoring and mentoring elementary, high school, and undergraduate students, many of whom come from underrepresented groups. He served as President of the Jacobs Graduate Student Council, where he helped organize events where engineering students present their research to their peers for feedback on future presentations and to younger students to inspire them to pursue engineering.

Edward Bouchet was the first African American doctoral recipient in the United States. He entered Yale College (now Yale University) in 1870. He graduated in 1874 and decided to stay on a couple more years to earn his PhD in Physics. Despite an impressive academic record (he got a PhD in two years!), he was unable to land a position at a college or university due to his race. He taught chemistry and physics at the Institute for Colored Youth in Philadelphia for more than 25 years; it was one of the few institutions that offered African Americans a rigorous academic program.

CSE Research Open House

Our research was represented prominently at the CSE Research Open House, held on January 31, 2020. The open house provides an opportunity for industry, alumni, and the broader UCSD community to get a view of the research going on in our department. It consisted of research talks in the morning, demos in the afternoon, a research poster session, and an awards ceremony.

Arden Ma and Dillon Hicks showing off some of the mangrove monitoring technology.

Engineers for Exploration (E4E) described their latest and greatest technologies during the sustainable computing session in the morning and showed off demos in the afternoon. The featured E4E projects included our project to document Maya archaeology sites, where we use state-of-the-art imaging sensors to create large-scale 3D models of archaeological sites that are then viewable in virtual reality. Another featured project is mangrove monitoring, which uses drones, multispectral image sensors, machine learning for automated ecosystem classification, and other new technologies to document and understand these fragile and important ecosystems. The radio collar tracker was the final presented project; its goal is to track animals equipped with radio transmitters using drones and software defined radio. Here’s the presentation if you want more detail on these projects and the E4E program.

Michael and his fancy best poster award.

Michael Barrow was awarded best poster for his research on “Data Driven Tissue Models for Surgical Image Guidance”. Michael leads this multidisciplinary project that spans the Jacobs School of Engineering and the School of Medicine. The goal is to develop more accurate modeling and visualization of tumors, blood vessels, and other important landmarks to provide surgeons with real-time feedback during the operation.

Finally, our close collaborator Tim Sherwood was honored with a CSE Distinguished Alumni Award. We have been working with Tim for almost two decades (pretty much since the time he graduated from UCSD). Our research includes a number of fundamental projects in hardware security including some of the initial work in FPGA security, 3D integrated circuit security, hardware information flow tracking, and computational blinking.

VeriSketch – Automating Secure Hardware Design

While it took much, much longer than it should have, the semiconductor industry is starting to realize that security is a critical part of the design process. Spectre, Meltdown, and other high profile hardware security flaws have shown the danger of ignoring security during the design and verification process. Intel, Xilinx, Qualcomm, Broadcom, NXP and other large semiconductor companies have large and growing security teams that perform audits for their chips to help find and then mitigate security flaws.

Emerging hardware security verification tools (including those from Tortuga Logic, which spun out of our research group) help find potential security flaws. They are powerful at detecting flaws that violate specified security properties and providing example behaviors showing how to exploit the flaw. Unfortunately, fixing these flaws remains a manual process, which is time consuming and often left without a viable solution.

VeriSketch takes a first step at automatically fixing the bugs found by the hardware verification tools. VeriSketch asks the designer to partially specify the hardware design, and then uses formal techniques to automatically fill in the sketch to create a design that is guaranteed to be devoid of the flaw. It leverages program synthesis, which automatically constructs programs that fit given specifications. VeriSketch introduces program synthesis into the hardware design world and uses security and functional constraints for the specification. This allows hardware designers to leave out details related to the control (e.g., partially constructed finite state machines) and datapath (e.g., incomplete logic constructs). VeriSketch uses formal methods to automatically derive these unknown parts of the hardware such that they meet the security constraints.
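As a toy illustration of the sketch-based synthesis idea (a deliberately simplified sketch, not VeriSketch itself, which works on hardware designs with a constraint solver rather than enumeration), consider filling an unknown “hole” in a partial design so that both a functional spec and a stand-in security constraint hold on all test inputs:

```python
from itertools import product

def synthesize(sketch, holes, spec, inputs):
    """Toy sketch-based synthesis: enumerate candidate values for the
    holes and return the first completion that satisfies the spec on
    every test input. Real tools use formal constraint solvers."""
    for candidate in product(*holes.values()):
        filled = dict(zip(holes.keys(), candidate))
        if all(spec(sketch(x, **filled), x) for x in inputs):
            return filled
    return None  # no completion satisfies the spec

# Sketch: a saturating increment with an unknown threshold hole T
sketch = lambda x, T: min(x + 1, T)
# Spec: functional behavior plus "never exceed 7" as a stand-in
# for a security constraint the completed design must guarantee
spec = lambda out, x: out <= 7 and (out == x + 1 or x >= 7)
holes = {"T": range(0, 16)}
print(synthesize(sketch, holes, spec, inputs=range(0, 10)))  # {'T': 7}
```

The designer specifies only the shape of the logic and the properties it must satisfy; the tool derives the missing details, which is the workflow VeriSketch brings to secure hardware design.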

As a proof of concept, we used hardware security verification tools to show that PLCache (a well known cache design that is supposedly resilient to cache side-channel attacks) does indeed have a flaw through its metadata (more specifically, the LRU bit). We were then able to use VeriSketch to automatically augment the PLCache design to remove this flaw.

More details on VeriSketch, the PLCache flaw, and other interesting hardware security verification techniques are detailed in our paper “VeriSketch: Synthesizing Secure Hardware Designs with Timing-Sensitive Information Flow Properties” presented at the ACM Conference on Computer and Communications Security. Congrats to the authors Armaiti Ardeshiricham, Yoshiki Takashima, Sicun Gao, and Ryan Kastner!

Underwater Localization Research Garners JASA Top Pick

Our research that shows it is possible to determine the position of underwater vehicles using only ambient ocean sounds was selected as a top pick in the signal processing technical area for the Journal of the Acoustical Society of America (JASA). Our algorithm provides a position estimate for underwater vehicles using ambient acoustic ocean noise as recorded by a single hydrophone onboard each vehicle.

To test our positioning algorithm, we deployed eight underwater vehicles off the coast of San Diego (shown in the red box in the figure). The vehicles are programmed to keep a depth of 6 m, but otherwise drift with the ocean currents. The positions derived using only ambient underwater noise were compared with those calculated using an array of acoustic pingers (shown by green diamonds). While the vehicles were drifting, a boat circled the drifting vehicles twice (once at approximately 11 m/s and once at approximately 4 m/s); the trajectory of the boat is shown with the start and end positions indicated. The right panel shows a close-up of the AUE trajectories, where the red bounding box matches the box on the left panel. Deployment times for both the boat and AUE trajectories are shown by the colorbar on the right. Our position estimates using only the onboard hydrophones are comparable to the much more complex, difficult-to-deploy, and costly localization infrastructure that uses the five buoys.
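The core signal processing idea can be sketched with a toy example (this is a simplified illustration of acoustic cross-correlation for travel-time estimation, not the paper’s full algorithm, which recovers position from ambient noise recorded on each vehicle): when two recordings share the same noise field with a relative propagation delay, the peak of their cross-correlation reveals that delay, which converts to distance through the speed of sound in seawater.

```python
import numpy as np

fs = 10_000          # sample rate (Hz)
c = 1500.0           # nominal sound speed in seawater (m/s)
rng = np.random.default_rng(1)

# One second of broadband ambient noise, heard at a second point
# with a propagation delay (toy model: pure delay, no attenuation)
noise = rng.normal(size=fs)
true_delay = 40                      # samples -> 40/fs s -> 6 m at c
rec_a = noise
rec_b = np.roll(noise, true_delay)

# Cross-correlate and locate the peak to estimate the delay
xcorr = np.correlate(rec_b, rec_a, mode="full")
lag = np.argmax(xcorr) - (len(rec_a) - 1)
print(lag, lag / fs * c)  # 40 samples -> 6.0 m
```

In the ocean the noise is not a clean delayed copy, so the real algorithm must average over long windows and contend with a moving, directional noise field, but the cross-correlation peak is the same underlying measurement.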

Our techniques that enable low-cost positioning of underwater vehicles have been documented before. In our previous work, we showed how to use snapping shrimp for underwater vehicle localization. Here we show how to use other naturally occurring ocean sounds to perform localization. All of this work was led by Dr. Perry Naughton and written up in his PhD thesis.

Real-time Dense SLAM

Quentin receiving the Best Paper Award

Simultaneous localization and mapping (SLAM) is a common technique for robotic navigation, 3D modeling, and virtual/augmented reality. It employs a compute-intensive process that fuses images from a camera into a virtual 3D space. It does this by looking for common features (edges, corners, and other distinguishing image landmarks) over time and uses those to infer the position of the camera (i.e., localization). At the same time, it creates a virtual 3D map of the environment (i.e., mapping). Since this is a computationally expensive task, real-time applications typically can only create crude or sparse 3D maps.
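The mapping side of dense SLAM systems like InfiniTAM is typically built on a truncated signed distance field (TSDF). Here is a minimal one-dimensional sketch (a toy illustration, not our FPGA implementation) of the fusion step: each depth measurement updates the voxels along a camera ray with a running weighted average, and the surface sits at the zero crossing of the fused field.

```python
import numpy as np

def fuse_tsdf(tsdf, weights, depth_meas, voxel_z, trunc=0.1):
    """Fuse one depth measurement into a truncated signed distance
    field along a single camera ray via a running weighted average."""
    sdf = depth_meas - voxel_z                  # signed distance to surface
    valid = sdf > -trunc                        # skip voxels far behind it
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)  # truncate and normalize
    w_new = 1.0
    num = tsdf[valid] * weights[valid] + tsdf_new[valid] * w_new
    tsdf[valid] = num / (weights[valid] + w_new)
    weights[valid] += w_new
    return tsdf, weights

# Ten voxels spaced 0.1 m along a ray; true surface near 0.42 m
voxel_z = np.arange(10) * 0.1
tsdf, weights = np.ones(10), np.zeros(10)   # +1 = free/unobserved space
for d in (0.42, 0.43, 0.41):                # three noisy depth frames
    tsdf, weights = fuse_tsdf(tsdf, weights, d, voxel_z)
# The zero crossing of the fused TSDF marks the surface
print(voxel_z[np.argmin(np.abs(tsdf))])     # 0.4, the nearest voxel
```

A real system performs this update for millions of voxels per frame, which is exactly the kind of regular, data-parallel workload that maps well onto FPGA hardware.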

To address this problem, we built a custom computing system that can create dense 3D maps in real-time. Our system uses an FPGA as the base processor. With careful design, an FPGA can be very efficient. “Careful design” typically equates to a painstaking, time-consuming manual process of specifying the low-level architectural details needed to obtain efficient computation.

Our paper “FPGA Architectures for Real-time Dense SLAM” (Quentin Gautier, Alric Althoff, and Ryan Kastner) describes the results of our system design. This was recently published, presented, and awarded the best paper at the IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP). The paper details the different parts of the InfiniTAM algorithm, describes the best techniques for accelerating the algorithms, and presents two open-source, complete end-to-end system implementations for the SLAM algorithms. The first targets a lower-power and portable programmable SoC (Terasic DE1) and the other a more powerful desktop solution (Terasic DE5).

We have found that many SLAM algorithms barely run out of the box. And it is even more challenging to get a hardware accelerated version working. We hope that our open-source repository will be valuable to the broader community and enable them to develop even more powerful solutions.

Links:
Dense SLAM for FPGAs Repository and Technical Paper.

Viva Las Vegas

As part of this year’s Design Automation Conference, I participated in the panel “Architecture, IP, or CAD: What’s Your Pick for SoC Security?”. That’s a bunch of acronyms and buzzwords related to the question of how to build more secure computer chips. DAC is one of the oldest, largest, and most prestigious conferences in electronics design. It was also the first big research conference that I attended; I went to DAC in New Orleans in 1999 as an undergraduate (which was an eye-opening experience in many regards), so I guess this was my 20th DAC anniversary.

I’ve been doing research in the hardware security space for a while now (more than 15 years!). I’ve seen this community grow from a niche academic community into a major focus at DAC (there were security sessions almost non-stop this year). And it was nice to see more hardware security companies on the floor including the amazing Tortuga Logic (full disclosure: I am a co-founder). Security clearly has become a major research and market push for the semiconductor and EDA industries.

I was the “academic” on this panel with two folks from industry — Eric Peeters from Texas Instruments and Yervant Zorian from Synopsys. Serge Leef from DARPA was the other panelist. Serge just went to DARPA from Mentor Graphics and is looking to spend a lot of our taxpayers’ money on hardware security. A very wise investment in my totally impartial opinion. I’m guessing that most of the audience was there to hear what Serge had to say and to see if any money fell out of his pockets as he left the room.

The panel started with short (5 min) presentations from each panelist and then there was a lot of time for Q&A from the moderator (the great Swarup Bhunia) and the audience.

My talking points focused on how academics, industry, and government should interact in this space. My answer: industry and government should give lots of funding for academic research (again, I’m totally not biased here…). I also argued that there really isn’t all that much interesting research left in hardware IP security, which I defined as Trojans, PUFs, obfuscation, and locking. Finally, I gave some research areas that are more interesting, including formalizing threat models and figuring out how to debug hardware security vulnerabilities. Neither is a small task, and my research group is making strides in both.

During the open discussion there were many other interesting points related to industry’s main interests (root of trust, not Trojans, …), the number of hardware vulnerabilities there are in the wild, metrics, hardware security lifecycle, and so on.

It was a quick visit to Vegas (~1 day), but it brought back some good memories, gave me some great food, and didn’t take too much of my money. All in all, a successful trip.

-Ryan