In brief, my argument was:
- Programming language design choices affect the kinds of vulnerabilities that programs written in those languages are susceptible to.
- The source of these vulnerabilities is not just programmer ignorance, but includes rational trade-offs between correctness/security and terseness, completeness, maintainability, efficiency, and other concerns.
- A "semantic gap" exists where programmers (intentionally or unintentionally) use an abstraction that doesn't do quite what they want it to do.
- Often this gap is innocuous (e.g., a silent overflow when incrementing a 64-bit counter), but sometimes it has catastrophic consequences (e.g., naive string interpolation → shell injection).
- It is possible to close some of these gaps without unduly breaking existing programs by using static analysis, delayed binding, and opt-in defaults to infer intent.
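Both gaps from the bullets above can be made concrete in a few lines. This is an illustrative sketch, not code from the talk: it uses Python's `ctypes` to simulate C-style fixed-width arithmetic and `subprocess` (on a POSIX shell) for the interpolation example; the `payload` value stands in for attacker-controlled input.

```python
import ctypes
import subprocess

# Gap 1: fixed-width integers wrap silently. A native Python int would
# simply grow, but a C-style uint64 (via ctypes) wraps to 0 with no error.
counter = ctypes.c_uint64(2**64 - 1)
counter.value += 1  # silently wraps around to 0

# Gap 2: naive string interpolation into a shell command. The ';' in the
# attacker-controlled payload smuggles in a second command.
payload = "hello; echo INJECTED"
naive = subprocess.run(f"echo {payload}", shell=True,
                       capture_output=True, text=True).stdout
# The shell parsed this as TWO commands, so "INJECTED" appears in the output.

# Passing arguments as a list bypasses the shell's parsing entirely:
safe = subprocess.run(["echo", payload],
                      capture_output=True, text=True).stdout
# Here the whole payload reaches echo as a single literal argument.
```

In both cases the code is "almost correct": it does exactly what the programmer asked for, just not what they meant.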
"Security by Closing the Semantic Gap"
Security is about more than just cryptography: programming language design choices shape the way programmers design programs. We start with code samples in popular programming languages and show how easy it is to write code that is almost correct, but that fails in ways that are catastrophic security-wise. We then demonstrate how tweaking language definitions can close the "semantic gap" — the difference between the code's intended effect and its actual semantics — which is where exploitable vulnerabilities creep in.
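One way to close the interpolation gap without breaking existing programs is to delay the binding of data into the command string and quote it at that point. The helper below is a hypothetical sketch of that idea (the name `sh` and its interface are my own, not from the talk), built on Python's `shlex.quote`:

```python
import shlex
import subprocess

def sh(template, *args):
    """Hypothetical helper: interpolate arguments into a shell command
    template, quoting each one so data can never alter command structure."""
    return template.format(*(shlex.quote(str(a)) for a in args))

payload = "hello; echo INJECTED"
cmd = sh("echo {}", payload)   # -> echo 'hello; echo INJECTED'
out = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
# The payload survives as one literal argument; no second command runs.
```

A language (or standard library) that made this quoting behavior the default for interpolation into shell commands would infer the programmer's usual intent — "pass this value as data" — while leaving an explicit escape hatch for the rare program that really wants raw splicing.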