Spiking Neural Networks (SNNs) use an event-based representation to perform computation more efficiently than conventional artificial neural networks. SNNs show great promise for low-energy computation, but are still limited by a lack of quality training tools and efficient hardware implementations.
Our recent work published at the ACM/IEEE International Symposium on Field-Programmable Gate Arrays (ISFPGA) extends the Xilinx FINN architecture to support streaming spiking neural networks (S2N2). S2N2 efficiently supports both axonal and synaptic delays for feedforward networks with interlayer connections. We show that because of the spikes’ binary nature, a binary tensor can be used to address the input events of a layer. We also demonstrate that S2N2 works well for automatic modulation classification, an important problem for modern wireless networks.
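The binary-addressing idea can be sketched in a few lines of NumPy. This is a toy illustration with made-up layer sizes, not the S2N2 hardware datapath: because each spike is 0 or 1, a synapse contributes either its full weight or nothing, so a binary tensor can simply select weight rows instead of performing multiplications.

```python
import numpy as np

# Hypothetical sizes: 4 input neurons, 3 output neurons.
num_in, num_out = 4, 3
rng = np.random.default_rng(0)
weights = rng.standard_normal((num_in, num_out))

# One timestep of input events as a binary tensor:
# spikes[i] == 1 means input neuron i fired this timestep.
spikes = np.array([1, 0, 1, 0], dtype=np.uint8)

# The binary tensor addresses (selects) weight rows; no
# multiplies are needed to accumulate the membrane update.
membrane_update = weights[spikes == 1].sum(axis=0)

# Equivalent dense formulation (spikes @ weights) for reference:
assert np.allclose(membrane_update, spikes @ weights)
```

In hardware, this selection-and-accumulate structure is what makes spike processing cheap relative to the dense multiply-accumulate of a conventional network layer.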
The work was done in collaboration with Xilinx. For more details, check out Ali’s talk at ISFPGA.
Paper Reference: Alireza Khodamoradi, Kristof Denolf, and Ryan Kastner, “S2N2: A FPGA Accelerator for Streaming Spiking Neural Network”, International Symposium on Field-Programmable Gate Arrays (ISFPGA) (pdf)
Our large-scale 3D modeling work was featured on the National Geographic docuseries “Ancient China from Above”. We developed 3D models using drones, multispectral cameras, lidar, and other cutting-edge technology, which provided archaeologist Alan Maca with new insights to better understand ancient Chinese civilizations.
As part of the production, Ryan and Eric Lo traveled to some of the most remote parts of China, including Xanadu (Kublai Khan’s summer palace in Inner Mongolia), the Han Great Wall in the Gobi Desert, and Shimao, a new archaeological site on the Loess Plateau known as China’s Pompeii.
The three-part series aired on the National Geographic Channel and is available on most video streaming platforms. The full “Secrets of the Great Wall” episode is available on YouTube. We appear around the 26-minute mark.
Did you know that mangroves sequester more carbon than rainforests? In addition to being one of the best carbon scrubbers in the world, they also protect coastlines from erosion and hurricanes and provide an amazing nursery for aquatic life. Yet these important ecosystems are in decline worldwide, hurt by industrialization, rising sea levels, and other climatic events.
As part of the activities around World Mangrove Day, Ryan moderated an online panel, “The Science Behind Remote Sensing”, on using technology to monitor and rehabilitate mangroves. The panel featured researchers from NASA, Microsoft, UCSD, and the Nature Conservancy who are using drones, satellites, multispectral imaging, machine learning, and many other technologies to understand and rehabilitate mangroves around the world. Our collaborator Astrid Hsu presented some of the technologies that we are developing as part of the Engineers for Exploration program. There was a lot of interesting discussion on how to use technology to monitor, understand, and rehabilitate these important ecosystems.
Digitally documenting archaeological sites provides high-resolution 3D models that are more accurate than traditional analog (manual) recordings. However, capturing 3D data either comes at great financial cost (when using a lidar-based system) or is time-consuming during data collection and post-processing (when using photogrammetry). This has limited the use of these techniques in the field.
Depth sensors like the Microsoft Kinect and Intel RealSense provide a relatively low-cost way of capturing depth data. Open-source 3D mapping software provides fast and accurate algorithms to turn this depth data into 3D models. Our research combines depth sensors and 3D mapping algorithms to develop a low-cost 3D scanning system. We analyzed multiple sensors and software packages to develop a prototype system for creating large-scale 3D models of tunneling-based archaeological sites. We used this system to document the Maya archaeological site El Zotz in the Peten region of Guatemala. Our findings were recently published in the paper “Low-cost 3D scanning systems for cultural heritage documentation” in the Journal of Cultural Heritage Management and Sustainable Development.
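As a rough illustration of the first step in that pipeline, here is how a single depth image can be back-projected into a 3D point cloud using the standard pinhole camera model. This is a generic sketch, not our system's code; the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) below are placeholder values, not those of the Kinect or RealSense.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into 3D camera-frame
    points using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coords
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy 2x2 depth image; one pixel has no depth reading (0.0).
depth = np.array([[1.0, 2.0],
                  [0.0, 1.5]])
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
print(cloud.shape)  # → (3, 3): one 3D point per valid depth pixel
```

Mapping software such as the open-source packages we evaluated essentially performs this back-projection per frame and then registers the resulting clouds into a single model.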
This research is the result of a multi-year (and ongoing) collaboration between Engineers for Exploration and archaeologists at El Zotz. Congrats to all those involved in this impressive project.
Advanced wireless communication techniques, like those found in 5G and beyond, require low latency while operating on high-throughput streams of radio frequency (RF) data. Automatic modulation classification is an important method for understanding how other radios are using the wireless channel. This information can be used in applications such as cognitive radios to better utilize the wireless channel and transmit information at faster rates.
The Edward Alexander Bouchet Graduate Honor Society is a network of preeminent scholars who exemplify academic and personal excellence, foster environments of support, and serve as examples of scholarship, leadership, character, service, and advocacy for students who have been traditionally underrepresented in the academy.
Jeremy will fit in perfectly. His research on hardware security explores new ways to mitigate side-channel attacks and has resulted in several research papers in top venues. His leadership, service, and advocacy were evident during his time as an undergraduate at Howard University and throughout his PhD career at UCSD. A small sampling of this includes tutoring and mentoring elementary, high school, and undergraduate students, many of whom come from underrepresented groups. He also served as President of the Jacobs Graduate Student Council, where he helped organize events for engineering students to present their research to peers for feedback on future presentations, and to young students to inspire them to pursue engineering.
Edward Bouchet was the first African American doctoral recipient in the United States. He entered Yale College (now Yale University) in 1870. He graduated in 1874 and decided to stay on a couple more years to get his PhD in Physics. Despite an impressive academic record (he got a PhD in two years!), he was unable to land a position in a college or university due to his race. He taught chemistry and physics at the Institute for Colored Youth in Philadelphia for more than 25 years; it was one of the few institutions that offered African Americans a rigorous academic program.
Our research was represented prominently at the CSE Research Open House, held on January 31, 2020. The open house provides an opportunity for industry, alumni, and the broader UCSD community to get a view of the research going on in our department. It consisted of research talks in the morning, demos in the afternoon, a research poster session, and an awards ceremony.
Engineers for Exploration (E4E) described their latest and greatest technologies during the sustainable computing session in the morning and showed off demos in the afternoon. The featured E4E projects included our project to document Maya archaeology sites, where we use state-of-the-art imaging sensors to create large-scale 3D models of archaeological sites that are then viewable in virtual reality. Another featured project is mangrove monitoring, which uses drones, multispectral image sensors, machine learning for automated ecosystem classification, and other new technologies to document and understand these fragile and important ecosystems. The radio collar tracker was the final presented project; its goal is to track animals equipped with radio transmitters using drones and software-defined radio. Here’s the presentation if you want more detail on these projects and the E4E program.
Michael Barrow was awarded the best poster for his research on “Data Driven Tissue Models for Surgical Image Guidance”. Michael leads this multidisciplinary project, which spans the Jacobs School of Engineering and the School of Medicine. The goal is to develop more accurate modeling and visualization of tumors, blood vessels, and other important landmarks to provide surgeons with real-time feedback during an operation.
Finally, our close collaborator Tim Sherwood was honored with a CSE Distinguished Alumni Award. We have been working with Tim for almost two decades (pretty much since the time he graduated from UCSD). Our research together includes a number of fundamental projects in hardware security, including some of the initial work in FPGA security, 3D integrated circuit security, hardware information flow tracking, and computational blinking.
While it took much, much longer than it should have, the semiconductor industry is starting to realize that security is a critical part of the design process. Spectre, Meltdown, and other high-profile hardware security flaws have shown the danger of ignoring security during the design and verification process. Intel, Xilinx, Qualcomm, Broadcom, NXP, and other large semiconductor companies have large and growing security teams that perform audits of their chips to help find and then mitigate security flaws.
Emerging hardware security verification tools (including those from Tortuga Logic, a spin-out of our research group) help find potential security flaws. They are powerful at detecting flaws that violate specified security properties and at providing example behaviors showing how to exploit a flaw. Unfortunately, fixing these flaws remains a manual process that is time-consuming and often ends without a viable solution.
VeriSketch takes a first step toward automatically fixing the bugs found by hardware verification tools. VeriSketch asks the designer to partially specify the hardware design, and then uses formal techniques to automatically fill in the sketch to create a design that is guaranteed to be devoid of the flaw. It leverages program synthesis, which automatically constructs programs that fit a given specification. VeriSketch introduces program synthesis into the hardware design world and uses security and functional constraints as the specification. This allows hardware designers to leave out details related to the control (e.g., partially constructed finite state machines) and datapath (e.g., incomplete logic constructs). VeriSketch uses formal methods to automatically derive these unknown parts of the hardware such that they meet the security constraints.
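To give a flavor of sketch-based synthesis, here is a toy Python analogy (not the VeriSketch tool or its Verilog front end, and the names are all illustrative): a design is left with a hole, and a search procedure fills in the hole so that both a functional specification and a security-style constraint hold. VeriSketch does this with formal solvers over hardware designs rather than the brute-force enumeration shown here.

```python
# A "design" with a hole: a 2-bit saturating counter whose
# increment step is left unspecified by the designer.
def make_design(hole):
    def step(state, enable):
        return min(state + hole, 3) if enable else state
    return step

def functional_spec(step):
    # Counting from 0 with enable asserted twice must reach 2.
    return step(step(0, True), True) == 2

def security_spec(step):
    # Non-interference-style constraint: disabled steps must not
    # change state, so an observer cannot infer hidden activity.
    return all(step(s, False) == s for s in range(4))

# Synthesis loop: enumerate candidate hole values and keep the
# first completion that satisfies every constraint.
solution = next(h for h in range(4)
                if functional_spec(make_design(h))
                and security_spec(make_design(h)))
print(solution)  # → 1
```

The point of the analogy: the designer writes the structure and the constraints; the tool, not the designer, derives the concrete value that makes the design both functional and secure.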
As a proof of concept, we used hardware security verification tools to show that PLCache (a well-known cache design that is supposedly resilient to cache side-channel attacks) does indeed have a flaw through its metadata (more specifically, the LRU bit). We were then able to use VeriSketch to automatically augment the PLCache design to remove this flaw.
To test our positioning algorithm, we deployed eight underwater vehicles off the coast of San Diego (shown in the red box in the figure). The vehicles are programmed to maintain a depth of 6 m, but otherwise drift with the ocean currents. The positions derived using only ambient underwater noise were compared with those calculated using an array of acoustic pingers (shown by green diamonds). While the vehicles were drifting, a boat circled the drifting vehicles twice (once at approximately 11 m/s and once at approximately 4 m/s); the trajectory of the boat is shown with the start and end positions indicated. The right panel shows a close-up of the AUE trajectories, where the red bounding box matches the box on the left panel. Deployment times for both the boat and AUE trajectories are shown by the colorbar on the right. Our position estimates using only the underwater microphones are comparable to the much more complex, difficult-to-deploy, and costly localization infrastructure that uses the five buoys.
Simultaneous localization and mapping (SLAM) is a common technique for robotic navigation, 3D modeling, and virtual/augmented reality. It employs a compute-intensive process that fuses images from a camera into a virtual 3D space. It does this by looking for common features (edges, corners, and other distinguishing image landmarks) over time and uses those to infer the position of the camera (i.e., localization). At the same time, it creates a virtual 3D map of the environment (i.e., mapping). Since this is a computationally expensive task, real-time applications typically can only create crude or sparse 3D maps.
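The localization half of that loop can be caricatured in a few lines of NumPy. This is a deliberately toy 2D, translation-only model on synthetic feature matches; a real SLAM system estimates full 6-DoF camera poses and builds the map at the same time.

```python
import numpy as np

rng = np.random.default_rng(1)
true_motion = np.array([0.5, -0.2])  # camera translation between frames

# 50 landmark features detected in frame t...
features_t0 = rng.uniform(0, 10, size=(50, 2))
# ...reappear in frame t+1 shifted by the camera motion,
# plus a little measurement noise.
features_t1 = features_t0 + true_motion + rng.normal(0, 0.01, (50, 2))

# Localization: robustly estimate the camera motion from the
# displacement of matched features (median resists outliers).
estimated = np.median(features_t1 - features_t0, axis=0)
print(np.round(estimated, 2))
```

Repeating this estimation for every frame, while simultaneously triangulating the features into a 3D map, is what makes full SLAM so computationally demanding.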
To address this problem, we built a custom computing system that can create dense 3D maps in real time. Our system uses an FPGA as its base processor. With careful design, an FPGA implementation can be very efficient. Unfortunately, “careful design” typically equates to a painstaking, time-consuming manual process to specify the low-level architectural details needed to obtain efficient computation.
We have found that many SLAM algorithms barely run out of the box, and it is even more challenging to get a hardware-accelerated version working. We hope that our open-source repository will be valuable to the broader community and enable them to develop even more powerful solutions.