Every day we put our systems, our information, and ourselves at risk. While networked devices enable us and our digital agents to coordinate, communicate, and solve problems in ever more automated and efficient ways, they require us to entrust our personal, proprietary, and security-critical information to complex and buggy software systems. In the future, software systems will only become more complex, and the information stored in them more alluring to potential adversaries. I propose a quantitative and decidedly imperfect approach to improving the privacy and security of these future software systems. In this talk, I describe two projects that follow this quantitative approach. The first analyzes how private information is protected and lost in existing distributed constraint optimization algorithms, and how algorithmic changes can significantly reduce this loss. The second tackles the myth that complex systems must be easier for the adversary to attack than for the defender to defend. Though the solutions discussed in these projects cannot guarantee that no information or system will be lost, they do quantify the inherent risk, so that informed trade-offs between security and other desirable system properties can be made.