PrioriFI @ ASPLOS’26

This past week, Olivia Weng and lab alum Jennifer Switzer journeyed to Pittsburgh to attend ASPLOS’26. Olivia presented her paper, PrioriFI (repo, paper), a software fault-injection tool that prioritizes flipping the most sensitive parameter bits of a neural network first. PrioriFI argues that neural-network fault-injection tools should be wary of relying on bit-level monotonicity: the assumption that flipping a more significant bit is always more harmful than flipping a less significant one. We show the many exceptions to this common assumption and present the PrioriFI algorithm as a method of avoiding its pitfalls. The PrioriFI project was a collaboration that included lab member Andres Meza and Nhan Tran of Fermilab.
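To see why bit significance does not always track sensitivity, here is a minimal sketch (not the PrioriFI algorithm itself; the `flip_bit` helper and the example weight are illustrative) that flips each bit of a float32 weight and measures the resulting change. For IEEE-754 floats, flipping the most significant bit (the sign) of a small weight perturbs it far less than flipping a high exponent bit:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 and return the perturbed value."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (out,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return out

weight = 0.015625  # an illustrative small float32 weight (2**-6)

# Absolute change from flipping each bit, most significant first.
for bit in range(31, -1, -1):
    delta = abs(flip_bit(weight, bit) - weight)
    print(f"bit {bit:2d}: |delta| = {delta:g}")
```

Running this shows that bit 31 (the sign, the most significant bit) changes the weight by only 2×|weight|, while bit 30 (the top exponent bit) blows it up by dozens of orders of magnitude, so a sensitivity ordering keyed purely to bit position gets these backwards.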

Liv presenting PrioriFI @ ASPLOS’26

Olivia and Jen met many computer architects and had a great time attending ASPLOS, including meeting up with former KRG undergrad researcher Subash Katel. Subash told them about the cool quantum computing research he’s doing for his PhD at Princeton and introduced them to many researchers. It’s always great to reconnect.