Quantum computing promises exponential speedups for a class of important problems. However, this potential can be realized only with large-scale quantum systems comprising many qubits. Unfortunately, building a scalable quantum computer poses several challenges, including the design of conventional computing and memory systems that can efficiently interface with the quantum substrate while obeying the thermal and power constraints dictated by the quantum devices. As computer architects, we address these system design challenges for scalable quantum computers.
Hardware has emerged as a significant source of vulnerabilities for attacks threatening data confidentiality and integrity. Numerous attacks target different layers of the hardware stack, such as processors (Spectre, Meltdown, and others), caches (side-channel attacks), and main memory (cold-boot, Rowhammer, and other physical attacks), and are capable of leaking or tampering with sensitive data. One of the biggest challenges we address in this project is how to redesign hardware to be secure against current and future attacks while keeping the cost of security minimal. At the same time, we leverage lessons from secure hardware design to discover new, faster, and stealthier attacks.
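To give a flavor of the side-channel leakage mentioned above, here is a minimal, deliberately simplified sketch. It uses an early-exit comparison whose amount of work depends on secret data; for determinism it counts loop iterations rather than measuring time, but real cache and timing attacks exploit the same principle. The secret string and guesses are hypothetical, purely for illustration.

```python
# Toy side-channel illustration (assumed example, not from any real system):
# an early-exit comparison leaks information through how much work it does.

SECRET = "hunter2"  # hypothetical secret

def leaky_compare(guess):
    """Early-exit compare; returns (match, iterations performed).

    The iteration count is the 'side channel': in a real attack it would
    be observed indirectly, e.g. via timing or cache behavior.
    """
    steps = 0
    for g, s in zip(guess, SECRET):
        steps += 1
        if g != s:
            return False, steps
    return guess == SECRET, steps

# An observer of 'steps' learns how many leading characters were correct,
# enabling character-by-character recovery of the secret.
_, bad = leaky_compare("abcdefg")    # wrong at the first character
_, close = leaky_compare("huntex?")  # five correct leading characters
print(bad, close)  # prints "1 6"
```

The fix in real code is a constant-time comparison that always inspects every byte, so the observable work no longer depends on the secret.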
The ability of Deep Learning to solve several challenging classification problems with high accuracy has driven the widespread use of Deep Neural Networks (DNNs) in products and services. Unfortunately, DNNs are susceptible to a variety of adversarial attacks that allow an adversary to fool the model into misclassifying an input, leak sensitive user data, or even steal the functionality of the model entirely. This poses a serious barrier to the adoption of deep learning in applications where the security, confidentiality, and privacy of the model and data are important. We explore methods to defend against such adversarial attacks with minimal impact on the accuracy of the model.
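To illustrate how an adversarial misclassification attack works, here is a minimal sketch in the style of the fast gradient sign method (FGSM) applied to a toy linear classifier. The weights, input, and perturbation budget are hypothetical; a real attack targets a deep network, but the principle, stepping the input against the sign of the gradient, is the same.

```python
# Minimal FGSM-style evasion sketch on an assumed toy linear classifier.
# All values here are hypothetical, chosen only to demonstrate the idea.

w = [1.0, -2.0, 0.5]   # toy model weights (assumed)
b = 0.1                # toy model bias (assumed)

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

x = [0.4, 0.1, 0.2]    # benign input, classified as 1
eps = 0.2              # small, bounded perturbation budget

# For a linear model, the gradient of the score w.r.t. the input is just w,
# so the FGSM step perturbs each feature by eps against the sign of w.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))  # prints "1 0": the tiny perturbation flips the label
```

The key point is that the perturbation is small (bounded by eps per feature) yet flips the prediction, which is exactly the behavior defenses must suppress without hurting accuracy on benign inputs.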