Tuesday, 20 September 2016

Comprehensively testing software patches with symbolic execution


A large fraction of the cost of maintaining software applications is associated with detecting and fixing errors introduced by patches. Patches are prone to introducing failures, and as a result users are often reluctant to upgrade their software to the most recent version, relying instead on older versions that typically expose a reduced set of features and are frequently susceptible to critical bugs and security vulnerabilities. To properly test a patch, each line of code in the patch, and every new behaviour it introduces, should be covered by at least one test case. However, the large effort involved in devising relevant test cases means that such thorough testing rarely happens in practice.

... the full blog post can be read on the IEEE Software Blog

Monday, 23 November 2015

Multi-Version Execution Defeats a Compiler-Bug-Based Backdoor

Cristian Cadar, Luís Pina, John Regehr



What should you do if you’re worried that someone might have exploited a compiler bug to introduce a backdoor into code that you are running? One option is to find a bug-free compiler. Another is to run versions of the code produced by multiple compilers and to compare the results (of course, under the additional assumption that the same bug does not affect all the compilers). For some programs, such as those whose only effect is to produce a text file, comparing the output is easy. For others, such as servers, this is more difficult and specialized system support is required.


Today we’ll look at using Varan the Unbelievable to defeat the sudo backdoor from the PoC||GTFO article. Varan is a multi-version execution system that exploits the fact that if you have some unused cores, running additional copies of a program can be cheap. Varan designates a leader process whose system call activity is recorded in a shared ring buffer, and one or more follower processes that read results out of the ring buffer instead of actually issuing system calls.  
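Varan itself operates at the binary level and records raw system-call results in shared memory, but the leader/follower division of labour can be illustrated with a toy model. The sketch below is a hypothetical Python analogue, not Varan's actual implementation: the leader "performs" a syscall and records its name and result in a shared buffer, while the follower consumes the recorded result instead of issuing the call itself.

```python
from collections import deque

class RingBuffer:
    """Toy stand-in for Varan's shared ring buffer of syscall records."""
    def __init__(self):
        self.buf = deque()

    def record(self, entry):
        self.buf.append(entry)

    def replay(self):
        return self.buf.popleft()

def leader_write(ring, data):
    # The leader actually executes the system call and records its
    # name and result so that followers can consume them later.
    result = len(data)  # pretend write() returned the byte count
    ring.record(("write", result))
    return result

def follower_write(ring, data):
    # The follower never issues the real syscall; it reads the
    # leader's recorded result out of the ring buffer instead.
    name, result = ring.replay()
    assert name == "write", "syscall divergence detected"
    return result

ring = RingBuffer()
assert leader_write(ring, b"hello") == 5
assert follower_write(ring, b"hello") == 5
```

Because the follower only replays recorded results, it lags the leader slightly but imposes almost no extra syscall overhead, which is what makes running additional copies cheap when spare cores are available.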


Compilers have a lot of freedom when generating code, but the sequence of system calls executed by a program represents its external behaviour, and in most cases the compiler is not free to change it at all. There might be slight variations, e.g. due to different compilers using different libraries, but these can be easily handled by Varan. Since all correctly compiled variants of a program should have the same external behaviour, any divergence in the sequence of system calls across versions flags a potential security attack, in which case Varan stops the program before any harm is done.
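The divergence check itself is conceptually simple: compare the syscall sequences of the versions and flag the first mismatch. The following sketch is a deliberately simplified model (real Varan compares calls online as they occur and can tolerate benign library-induced variations, which this toy version does not attempt):

```python
def diverged(leader_trace, follower_trace):
    """Flag a divergence between two syscall-name sequences.
    Results such as timestamps may legitimately differ across
    versions, so only the sequence of call names is compared."""
    if len(leader_trace) != len(follower_trace):
        return True
    for l, f in zip(leader_trace, follower_trace):
        if l != f:
            return True
    return False

benign   = ["open", "read", "write", "close"]
backdoor = ["open", "read", "execve", "close"]  # hypothetical malicious variant
assert not diverged(benign, benign)
assert diverged(benign, backdoor)
```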


Typically, Varan runs the leader process at full speed while also recording the results of its system calls into the ring buffer. However, when used in a security-sensitive setting, Varan can designate some system calls as blocking, meaning that the leader cannot execute those syscalls until all followers have reached that same program point without diverging.  For sudo, we designate execve as blocking, since that is a point at which sudo might perform an irrevocably bad action.
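The effect of marking a syscall as blocking is that of a rendezvous point: no version may proceed past it until every version has reached it. A minimal sketch of this idea, using a thread per version and a barrier to stand in for Varan's cross-process synchronisation (the names and traces below are illustrative, not Varan's API):

```python
import threading

barrier = threading.Barrier(2, timeout=5)
events = []
lock = threading.Lock()

def run_version(name, trace):
    for call in trace:
        if call == "execve":
            # Blocking syscall: wait until every version has reached
            # this same program point before letting any proceed.
            barrier.wait()
        with lock:
            events.append((name, call))

t1 = threading.Thread(target=run_version, args=("leader", ["open", "execve"]))
t2 = threading.Thread(target=run_version, args=("follower", ["open", "execve"]))
t1.start(); t2.start(); t1.join(); t2.join()

# execve was recorded by both versions, and only after both arrived.
assert [c for _, c in events].count("execve") == 2
```

In the real system, a version that diverges instead of reaching the blocking call causes the monitor to abort all versions, so the dangerous execve is never actually issued.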


So here’s the setup:
  1. We have a patched version of sudo 1.8.13 from the PoC||GTFO article. It runs correctly and securely when compiled by a correct C compiler, but improperly gives away root privileges when compiled by Clang 3.3 because the patch was designed to trigger a wrong-code bug in that compiler.
  2. We are going to pretend that we don’t know about the Clang bug and the backdoor. We compile two versions of the patched sudo: one with Clang 3.3, the other with the default system compiler, GCC 4.8.4.
  3. We run these executables under Varan. Since the critical system call execve is blocking, it doesn’t much matter which version is the leader and which is the follower.


Now let’s visit an Ubuntu 14.04 VM where both versions of sudo (setuid root, of course) and Varan are installed. We’re using a user account that is not in the sudoers file -- it should not be allowed to get root privileges under any circumstances. First let’s make sure that a sudo that was properly compiled (using GCC) works as expected:


$ /home/varan/sudo-1.8.13/install/bin/sudo-gcc cat /etc/shadow
Password:
test is not in the sudoers file.  This incident will be reported.


Next, we make sure that the backdoor is functioning as intended:
$ /home/varan/sudo-1.8.13/install/bin/sudo-clang cat /etc/shadow
Password:
root:!:16693:0:99999:7:::
...
test:$6$nL9rosCD$A.prR28WWZYgDHy.m6ovWYaRhP/OhOX7V9nDDirm9ZffjTWhLdJ7C171wTfLBpQNC2EoeEKAJpcEz9gV9ravL/:16693:0:99999:7:::
messagebus:*:16757:0:99999:7:::
colord:*:16757:0:99999:7:::


So far so good. Next let’s try the gcc-compiled sudo as the leader with the backdoored sudo as the follower:


$ vx-suid /home/varan/sudo-1.8.13/install/bin/sudo-gcc \
         /home/varan/sudo-1.8.13/install/bin/sudo-clang -- cat /etc/shadow
Password:
$
test is not in the sudoers file.  This incident will be reported.


What happened here is that the gcc-compiled leader runs as before, since it doesn’t ever try to execute an execve call.  When the backdoored follower tries to execute the malicious execve call, Varan detects the divergence and terminates both processes safely.


Now let’s try switching around the leader and follower, i.e., run the backdoored sudo as the leader with the gcc-compiled sudo as the follower:
$ vx-suid /home/varan/sudo-1.8.13/install/bin/sudo-clang \
         /home/varan/sudo-1.8.13/install/bin/sudo-gcc -- cat /etc/shadow
Password:
$


This time the leader tries to execute the malicious execve call, and Varan blocks its execution until the follower reaches the same system call or diverges. In this case, the follower tries to execute a write system call (to print “test is not in the sudoers file…”) and thus Varan detects divergence and again terminates execution safely.


In this example, we only ran two versions in parallel, but Varan can run more than two versions.  In terms of performance and resource utilization, security applications like sudo are a great match for multi-version execution: they are not CPU-bound, so any performance degradation is imperceptible to the user, and the extra cores are needed only briefly, during the critical security validation checks.  We are looking into applying this approach to other security-critical applications (e.g. ssh-agent and password managers), and are investigating a way of hardening executables by packaging into a single binary Varan together with several versions of the program, each compiled by a different compiler. We can then deploy this hardened executable in place of the original program.


Of course, Varan can detect misbehavior other than compiler-bug-based backdoors. Divergence could be caused by a memory or CPU glitch, by a plain old compiler bug that is triggered unintentionally instead of being triggered by an adversarial patch, or by a situation where an application-level undefined behavior bug has been exploited by only one of the compilers, or even where both compilers exploited the bug but not in precisely the same way. A nice thing about N-version programming at the system call level is that it won’t bother us about transient divergences that do not manifest as externally visible behaviour through a system call.


We’ll end by pointing out a piece of previous work along these lines: the Boeing 777 uses compiler-based and also hardware-based N-version diversity: there is a single version of the Ada avionics software that is compiled by three different compilers and then it runs on three different processors: a 486, a 68040, and an AMD 29050.

Thursday, 26 June 2014

Can we improve the journal review process in computer science?

As in all scientific disciplines, peer review plays a core role in computer science research. But one aspect that sets apart our discipline from others is that most areas of computer science are driven by conference rather than journal publications. This has been discussed numerous times, and whether our community should change its publication culture is a controversial subject; just Google "conferences vs. journals in computer science" if you are not familiar with this debate.

One key advantage of conference publications is their quick reviewing cycle, which is often assumed to be in conflict with a careful, high-quality reviewing process. While the high reviewing load and the tight deadlines imposed on program committee (PC) members of top conferences do indeed endanger the peer review process, I argue that the conference reviewing process is in many ways of higher quality than that used by journals. I will discuss two such aspects below.

(1) Conference reviewing puts a lot of emphasis on discussions among PC members.  Top CS conferences have both an online discussion stage, typically lasting a couple of weeks, as well as an in-person PC meeting taking one or two days.  Given the emphasis that our community (justifiably!) places on these discussions, I find it surprising that this is completely absent from journal reviewing. When I review for a journal, I typically only receive the other reviewers' comments (if at all!), without any opportunity to further discuss the paper and clarify our respective positions. Contrast this with the conference reviewing process, where I often have lengthy discussions with the other PC members, both online and in-person, which I believe considerably improves our understanding of the paper and the overall reviewing process.

(2) Conferences have a much easier time convincing top researchers to dedicate a lot of their time reviewing submissions.  This has an impact both on the quality of the reviews (with experts sometimes unwilling to review for journals) and on the length of the reviewing cycle. If there is any chance for our community to adopt journals more widely (and I'm not arguing here either way), journals need to have a faster publication cycle: having to wait more than one year until your paper is published (and in many cases longer!)  is certainly not a big selling point. Whenever I ask editors of top journals in our field why things take so long, they mention the difficulty of finding reviewers. Conversely, I know many fellow junior and mid-career researchers who admit declining journal reviews as a general rule. However, these same people would rarely refuse the invitation to join the PC of a top conference, despite the latter requiring ten times more work! Why?

The answer is simple: journal reviewers are assigned a secondary role compared to conference reviewers. For instance, while they make a recommendation, they don't have any opportunity to further discuss the paper and their position with their co-reviewers, the final decision being taken solely by the editor.

More importantly, journals provide few incentives to reviewers compared to conferences. Firstly, the PC of a top conference is widely advertised, giving its members a lot of visibility in their community. Contrast this to journals, where only the editorial staff gets this benefit. Secondly, being part of a conference PC gives you the opportunity to meet and have technical discussions with top researchers in your community, both online and in person (not to mention that conference PCs sometimes organize informal workshops co-located with the PC meeting).  Contrast this to journal reviewing, where you don't get the chance to exchange a single word with your co-reviewers!

These are not the only strengths of conference reviewing, and there are certainly many weaknesses as well when compared to journal reviewing. But in relation to some of the points above, I believe journals could improve their reviewing process. In particular, why not give journal reviewers the possibility of having an online discussion? Our community justifiably puts a lot of emphasis on discussions among conference reviewers, and there is no reason not to make this a core part of the journal reviewing process as well.