PLDI'06 Tutorial T1:
Enforcing and Expressing Security with Programming Languages

Andrew Myers
Cornell University
http://www.cs.cornell.edu/andru

Computer security
- Goal: prevent bad things from happening:
  - Clients not paying for services
  - Critical service unavailable
  - Confidential information leaked
  - Important information damaged
  - System used to violate laws (e.g., copyright)
- Conventional security mechanisms aren't up to the challenge
Harder & more important
- In the '70s, computing systems were isolated:
  - software updates were done infrequently, by an experienced administrator
  - you trusted the (few) programs you ran
  - physical access was required
  - crashes and outages didn't cost billions
- The Internet has changed all of this:
  - we depend upon the infrastructure for everyday services
  - you have no idea what programs do
  - software is constantly updated, sometimes without your knowledge or consent
  - a hacker in the Philippines is as close as your neighbor
  - everything is executable (e.g., web pages, email)

Language-based security
- Conventional security treats the program as a black box:
  - encryption
  - firewalls
  - system calls/privileged mode
  - process-level privilege and permissions-based access control
- This prevents addressing important security issues:
  - downloaded and mobile code
  - buffer overruns and other safety problems
  - extensible systems
  - application-level security policies
  - system-level security validation
- Languages and compilers to the rescue!
Outline
- The need for language-based security
- Security principles
- Security properties
- Memory and type safety
- Encapsulation and access control
- Certifying compilation and verification
- Security types and information flow
- Handouts: copy of slides
- Web site: updated slides, bibliography
  www.cs.cornell.edu/andru/pldi06-tutorial

Security principles
Conventional OS security
- Model: the program is a black box
- The program talks to the OS via a protected interface (system calls)
  - Multiplex hardware
  - Isolate processes from each other
  - Restrict access to persistent data (files)
+ Language-independent, simple, limited
(Diagram: a user-level program sits above the operating system kernel; hardware memory protection separates them.)

Access control model
- The classic way to prevent "bad things" from happening
- Requests to access resources (objects) are made by principals
- A reference monitor (e.g., the kernel) permits or denies each request
(Diagram: Principal --request--> Reference Monitor --> Object (Resource))
Authentication vs. Authorization
- The abstraction of a principal divides enforcement into two parts:
  - Authentication: who is making the request?
  - Authorization: is this principal allowed to make this request?
(Diagram: Principal --request--> Reference Monitor --> Object (Resource))

1st guideline for security
- Principle of complete mediation: every access to every object must be checked by the reference monitor
- Problem: OS-level security does not support complete mediation
OS: coarse-grained control
- The operating system enforces security at the system-call layer
  - Hard to control an application when it is not making system calls
- Security enforcement decisions are made with regard to large-granularity objects
  - Files, sockets, processes
- Coarse notion of principal
  - If you run an untrusted program, should the authorizing principal be "you"?

Need: fine-grained control
- Modern programs make security decisions with respect to application abstractions:
  - UI: access control at the window level
  - Mobile code: no network send after a file read (see the sketch below)
  - E-commerce: no goods until payment
  - Intellectual property rights management
- Need an extensible, reusable mechanism for enforcing security policies
  - Language-based security can support an extensible protected interface, e.g., Java security
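The "no network send after a file read" policy is a classic security automaton. The following is a minimal sketch of how an inlined monitor could enforce it; the event names and the driver are illustrative, not from the tutorial.

    /* Sketch of a security automaton for "no network send after a file
     * read".  check_event() would be called before each security-relevant
     * operation; the event names here are hypothetical. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef enum { FILE_READ, NET_SEND } event_t;

    static int has_read_file = 0;            /* automaton state */

    static void check_event(event_t e) {
        switch (e) {
        case FILE_READ:
            has_read_file = 1;               /* remember the read */
            break;
        case NET_SEND:
            if (has_read_file) {             /* policy violation: halt */
                fprintf(stderr, "policy violation: send after read\n");
                exit(1);
            }
            break;
        }
    }

    int main(void) {
        check_event(NET_SEND);               /* allowed: nothing read yet */
        check_event(FILE_READ);
        check_event(NET_SEND);               /* rejected: program halted  */
        return 0;
    }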
2nd guideline for secure design
- Principle of Least Privilege: each principal is given the minimum access needed to accomplish its task [Saltzer & Schroeder '75]
- Examples:
  + Administrators don't run day-to-day tasks as root, so "rm -rf /" won't wipe the disk.
  - fingerd runs as root so it can access different users' or hosts' .plan files. But then it can also "rm -rf /".

Least privilege problems
- OS privilege is coarse-grained: user/group
- Applications need finer granularity
  - Web applications: principals unrelated to OS principals
- Who is the "real" principal?
  - Trusted program? Full power of the user principal
  - Untrusted program? Something less
  - Trusted program with untrusted extension: ?
  - Untrusted program accessing a secure trusted subsystem: ?
- Requests may filter through a chain of programs
  - E.g., client browser → web server → web app → database
  - Loss of information about the original principal is typical
3rd guideline: Small TCB
- Trusted Computing Base (TCB): the components whose failure compromises the security of a system
  - Example: the TCB of an operating system includes the kernel, the memory protection system, and the disk image
- Small/simple TCB:
  ⇒ TCB correctness can be checked/tested/reasoned about more easily ⇒ more likely to work
- Large/complex TCB:
  ⇒ TCB contains bugs enabling security violations
- Problem: a modern OS is huge and impossible to verify

Small TCB and LBS
- Conventional wisdom (c. 1975): "the operating system is small and simple, the compiler is large and complex"
  - The OS is a small TCB, the compiler a large one
- c. 2003:
  - OS (Win2k) = 50M lines of code, compiler ~ 100K lines of code
  - Hard to show the OS is implemented correctly
    - Many authors (untrustworthy: device drivers)
    - Implementation bugs often create security holes
  - Can now prove compilation and type checking correct
    - Easier than an OS: smaller, functional, not concurrent
The Gold Standard [Lampson]
- Authenticate: every access/request is associated with the correct principal
- Authorize: complete mediation of accesses
- Audit: recorded authorization decisions enable after-the-fact enforcement and identification of problems

When to enforce security
- Possible times to respond to security violations:
  - Before execution: analyze, reject, rewrite
  - During execution: monitor, log, halt, change
  - After execution: roll back, restore, audit, sue, call police
- Language-based techniques can help
Language-based techniques
A complementary tool in the arsenal: programs don't have to be black boxes! Options:
1. Analyze programs at compile time or load time to ensure that they are secure
2. Check analyses at load time to reduce the TCB
3. Transform programs at compile/load/run time so that they can't violate security, or to log actions for auditing

Maturity of language tools
Some things have been learned in the last 25 years…
- How to build a sound, expressive type system that provably enforces run-time type safety ⇒ protected interfaces
- Type systems that are expressive enough to encode multiple high-level languages ⇒ language independence
- How to build fast garbage collectors ⇒ trustworthy pointers
- On-the-fly code generation and optimization ⇒ high performance
Caveat: assumptions and abstraction
- Arguments for security always rest on assumptions:
  - "the attacker does not have physical access to the hardware"
  - "the code of the program cannot be modified during execution"
  - "no one is monitoring the EM output of the computer"
- Assumptions are vulnerabilities
  - Sometimes known, sometimes not
- Assumptions arise from abstraction
  - Security analysis is only tractable on a simplification (abstraction) of the actual system
  - Abstraction hides details (assumption: they are unimportant)
- Caveat: language-based methods often abstract aspects of computer systems
  - Need other runtime and hardware enforcement mechanisms to ensure the language abstraction isn't violated (a separation of concerns)

A sampler of attacks
Attack: buffer overruns

    char buf[100];
    ...
    gets(buf);

- The attacker supplies a long input that overwrites the function's local variables and return address (stack layout: buf, other locals, saved return address, attacker's payload)
- The "return" from the function then transfers control to the payload code
- Moral: SEGVs can be turned into attacks

Execute-only bit?
- Stack smashing executes code on the stack -- mark the stack non-executable?
- The return-to-libc attack defeats this:

    void system(char *arg) {
        r0 = arg;
        execl(r0, ...);   // "return" here with r0 set
    }

- Not all dangerous code lives in the code segment…
- More attacks: pointer subterfuge (function- and data-pointer clobbering), heap smashing, overwriting security-critical variables…
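A minimal sketch of the gets() pattern from the buffer-overrun slide above, together with a bounded alternative; the buffer size and output are illustrative.

    #include <stdio.h>

    int main(void) {
        char buf[100];

        /* Unsafe: gets() does not know how big buf is, so a long line
         * overruns the buffer and can overwrite the return address. */
        /* gets(buf); */

        /* Safer: fgets() writes at most sizeof(buf) bytes, so this
         * particular overrun is eliminated. */
        if (fgets(buf, sizeof buf, stdin) != NULL)
            printf("read: %s", buf);
        return 0;
    }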
Attack: format strings

    fgets(s, n, sock);
    …
    fprintf(output, s);

- Attack: pass a string s containing a %n qualifier (which writes the length of the formatted input to an arbitrary location)
- Use it to overwrite the return address, "returning" to malicious payload code in s

Attack: SQL injection
- Web applications typically construct SQL database queries. In PHP:

    $rows = mysql_query("UPDATE users SET pass='$pass'
                         WHERE userid='$userid'");

- The attacker uses a userid of ' OR '1' = '1'. Effect:

    UPDATE users SET pass=<pass> WHERE userid='' OR '1'='1'

- 69% of Internet security vulnerabilities are in web applications [Symantec]
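The fix for the format-string attack above is to never pass attacker-controlled data as the format argument. A small sketch, keeping the slide's variable names:

    #include <stdio.h>

    void echo_line(FILE *output, const char *s) {
        /* Vulnerable: fprintf(output, s);  -- s may contain %n, %s, ... */

        /* Safe: s is treated purely as data, never as a format string. */
        fprintf(output, "%s", s);
    }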
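The standard defense against the SQL injection above is a parameterized query: user input is bound as data and cannot change the query's structure. The slide's example is PHP/MySQL; this sketch instead uses SQLite's C API, to stay in C, and assumes an already-open connection db.

    #include <sqlite3.h>

    int update_password(sqlite3 *db, const char *userid, const char *pass) {
        sqlite3_stmt *stmt;
        int rc = sqlite3_prepare_v2(db,
            "UPDATE users SET pass = ? WHERE userid = ?", -1, &stmt, NULL);
        if (rc != SQLITE_OK) return rc;

        /* A userid of ' OR '1'='1 is now just an odd literal value,
         * not part of the SQL text. */
        sqlite3_bind_text(stmt, 1, pass, -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 2, userid, -1, SQLITE_TRANSIENT);

        rc = sqlite3_step(stmt);
        sqlite3_finalize(stmt);
        return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }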
Using system subversion
- Assume the attacker can run arbitrary code (possibly with dangerous privileges)
- An initial foothold on the target system enables additional attacks (using other holes)
- Worms: programs that autonomously attack computers and inject their own code into the computer
- Distributed denial of service: many infected computers saturate the target network

1988: Morris Worm
- Penetrated an estimated 5 to 10 percent of the roughly 60,000 machines on the Internet.
- Used a number of clever methods to gain access to a host:
  - brute-force password guessing
  - a bug in the default sendmail configuration
  - X Windows vulnerabilities, rlogin, etc.
  - a buffer overrun in fingerd
- Remarks:
  - System diversity helped to limit the spread.
  - "Root kits" for cracking modern systems are easily available and largely use the same techniques.
1999-2000: Love Bug & Melissa
- Both were email-based viruses that exploited:
  - a common mail client (MS Outlook)
  - trusting (i.e., uneducated) users
  - VB scripting extensions within messages, used to:
    - look up addresses in the contacts database
    - send a copy of the message to those contacts
- Melissa: hit an estimated 1.2 million machines.
- Love Bug: caused an estimated $10B in damage.
- Remark: no passwords or crypto were involved.

Why did it succeed?
- Visual Basic scripts were invoked transparently upon opening
- They ran with the full privileges of the user
- The kernel doesn't know about fine-grained application abstractions or the related security issues: mail messages, the contacts database, etc.
- Recipients trusted the sender – after all, they know them
- The interactions of a complex system were unanticipated
A solution for Melissa?
- Turn off all executable content?
  - no problem when email was just text
  - but executable content is genuinely useful
    - ex: automated meeting planner agents, PostScript, MPEG-4 codecs, client-side forms, etc.
  - the US DoD tried to do this: revolt
- Fundamental tension:
  - modern software wants to be open and extensible
    - programmable components are ultimately flexible
    - PostScript, Emacs, Java[script], VB, Jini, ActiveX, plug-n-play...
  - security wants things to be closed: least privilege
  - turning off extensibility is a denial-of-service attack

2003: MS-SQL Slammer worm
- Jan. 25, 2003: SQL and MSDE servers on the Internet were turned into worm broadcasters
  - Buffer-overrun vulnerability
  - Spread to most vulnerable servers on the Internet in less than 10 min!
- Denial of Service attack
  - Affected databases unavailable
  - Full-bandwidth network load ⇒ widespread service outage
  - "Worst attack ever" – brought down many sites, not the Internet
- Can't rely on patching!
  - Infected SQL servers at Microsoft itself
  - Owners of most MSDE systems didn't know they were running it…extensibility again
Virus scanning?
- Scan for suspicious code
  - e.g., McAfee, Norton, etc.
  - based largely on a lexical signature
  - the most effective commercial tool
- But it only works for things you've seen
  - Melissa spread in a matter of hours
  - virus kits make it easy to disguise a virus
  - "polymorphic" viruses
- Doesn't help with worms
  - unless you can generate a filter automatically…

Security Properties
Security properties
- Security = "bad things don't happen"
- What kinds of properties should computing systems satisfy?

Security policies
- An execution (trace) of a program is a sequence of states s1 s2 s3 … encountered during execution
- A program has a set of possible executions T
- A generic formalization: a security policy is a predicate P on sets of executions
  - The program satisfies the policy if P(T)
- Examples:
  - P(T) if no null pointer is dereferenced in any trace in T
  - P(T) if every pair of traces in T with the same initial value for x has the same final value for y
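The two examples can be written a bit more formally in the notation of the next slide; init(t) and fin(t) below are assumed names for the initial and final states of a trace t:

    P1(T) ⇔ ∀t∈T. no state of t dereferences a null pointer
    P2(T) ⇔ ∀t,t'∈T. init(t)(x) = init(t')(x) ⇒ fin(t)(y) = fin(t')(y)

Note that P1 fits the one-trace-at-a-time form, while P2 constrains pairs of traces; this distinction matters for the "information flow is not a property" observation later in the deck.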
Safety properties
- "Nothing bad ever happens"
- A property is a policy that can be enforced using individual traces
  - P(T) ⇔ ∀t∈T. P'(t), where P' is some predicate on traces
- A safety property can be enforced using only the history of the program
  - If P'(t) does not hold, then all extensions of t are also bad
  - Amenable to run-time enforcement: don't need to know the future
- Examples:
  - access control (e.g., checking file permissions on file open)
  - memory safety (process does not read/write outside its own memory space)
  - type safety (data accessed in accordance with its type)

Liveness properties
- "Something good eventually happens"
  - If P'(t) does not hold, every finite sequence t can be extended to satisfy P'
- Example: nontermination
  - "The email server will not stop running"
  - Violated by denial-of-service attacks
  - Can't be enforced purely at run time
- Interesting properties often involve both safety and liveness
  - Every property is the intersection of a safety property and a liveness property [Alpern & Schneider]
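In the same notation, the two prose characterizations can be sketched as follows, where t·u denotes extending the finite trace t by u:

    Safety(P')   ⇔ ∀t. ¬P'(t) ⇒ ∀u. ¬P'(t·u)    (a bad prefix cannot be repaired)
    Liveness(P') ⇔ ∀ finite t. ∃u. P'(t·u)       (any prefix can still turn out well)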
Memory safety and isolation
- Process isolation: a running process cannot access memory that does not belong to it
  - Usually enforced by the hardware TLB
    - The TLB caches virtual→physical address mappings
    - Invalid virtual addresses (other processes) cause a kernel trap
  - Cross-domain procedure calls/interprocess communication (RPC/IPC) are expensive (TLB misses)
- Memory safety: a running process does not attempt to dereference addresses that are not valid allocated pointers
  - No read from or write to dangling pointers
  - Not provided by C, C++:

    int *x = (int *)0x14953300; *x = 0x0badfeed;

Control-flow integrity
- Actual control flow must conform to a "legal execution"
  - Code injection attacks violate CFI.
- Weak form: control can only be transferred to legal program code points
  - Rules out classic buffer overrun attacks
  - Not provided by C:

    int (*x)() = (int(*)()) 0xdeadbeef; (*x)();

- Stronger form: control must agree with a DFA or CFG capturing all legal executions
- Can be enforced cheaply by dynamic binary rewriting as in DynamoRIO [Kiriansky et al., 2002]
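A minimal sketch, not from the tutorial, of what enforced memory safety looks like: a "fat pointer" carries bounds, and every access is checked before the raw dereference.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int   *base;   /* start of the allocation */
        size_t len;    /* number of valid elements */
    } fatptr_t;

    static int checked_read(fatptr_t p, size_t i) {
        if (i >= p.len) {
            /* Out of bounds: halt instead of corrupting memory. */
            fprintf(stderr, "memory-safety violation\n");
            exit(1);
        }
        return p.base[i];
    }

    int main(void) {
        int data[4] = {1, 2, 3, 4};
        fatptr_t p = { data, 4 };
        printf("%d\n", checked_read(p, 2));   /* ok */
        printf("%d\n", checked_read(p, 9));   /* rejected at run time */
        return 0;
    }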
Type safety
- Values manipulated by the program are used in accordance with their types
  - Stronger than memory safety!
- Can be enforced at run time (Scheme), at compile time (ML), or by a mix (Java)
- Abstract data types: data types that can only be accessed through a limited interface
  - can protect their internal storage (private data)
- Kernel = ADT with interface = system calls; the abstraction barrier is enforced at run time by hardware

Access control
- Access control decision:
  - principal × request × object → boolean
- Access control matrix [Lampson]: principals are rows, objects are columns, entries are the allowed requests

                file1   file2   file3
      user1     r       rw      rx
      user2     r       r
      user3     rw      r

  - Columns of the matrix: access control lists (ACLs)
- Correct enforcement is a safety property
  - Safety can be generalized to take into account denial of access and corrective action by the reference monitor [Hamlen][Ligatti][Viswanathan]
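A tiny sketch of the access-control decision principal × request × object → boolean, encoding the example matrix above as data; the names and rights are illustrative.

    #include <stdbool.h>
    #include <stdio.h>

    enum { USER1, USER2, USER3, NPRINCIPALS };
    enum { FILE1, FILE2, FILE3, NOBJECTS };
    enum { READ = 1, WRITE = 2, EXEC = 4 };              /* request bits */

    /* matrix[principal][object] = set of allowed requests */
    static const int matrix[NPRINCIPALS][NOBJECTS] = {
        [USER1] = { [FILE1] = READ, [FILE2] = READ | WRITE, [FILE3] = READ | EXEC },
        [USER2] = { [FILE1] = READ, [FILE2] = READ },
        [USER3] = { [FILE1] = READ | WRITE, [FILE2] = READ },
    };

    static bool allowed(int principal, int request, int object) {
        return (matrix[principal][object] & request) == request;
    }

    int main(void) {
        printf("%d\n", allowed(USER1, WRITE, FILE2));    /* 1: permitted */
        printf("%d\n", allowed(USER2, WRITE, FILE1));    /* 0: denied    */
        return 0;
    }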
Information security
- Sometimes computer security is an aspect of physical security
  - Make sure attackers cannot take over the electric power distribution grid, military command-and-control, etc.
  - Can use type safety and access control to enforce the rules
- What we're trying to protect can also be the information on the computer: information security
  - Memory safety and type safety don't directly help

Information security: confidentiality
- Confidentiality: valuable information should not be leaked by computation.
- Also known as secrecy, though sometimes a distinction is made:
  - Secrecy: the information itself is not leaked
  - Confidentiality: nothing can be learned about the information
- Simple (access control) version:
  - Only authorized processes can read from a file
  - But… when should a process be "authorized"?
Confidentiality: a Trojan horse
- Access control does not help after the access control check is done
- Security violation even with "safe" operations
  (Diagram: program A reads personal.txt and passes the data to program B, which writes it to an output device.)

    % ls -l personal.txt
    rw------- personal.txt
    % more personal.txt
    ...

End-to-end confidentiality
- Access control controls the release of data but does not control its propagation
- End-to-end confidentiality:
  Information should not be improperly released by a computation no matter how it is used
- Enforcement requires tracking information flow
- Encryption provides end-to-end secrecy, but prevents most computation
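A tiny illustration, with made-up values, of why end-to-end confidentiality requires tracking information flow: once a program is allowed to read the secret, it can leak it both explicitly and implicitly through control flow.

    #include <stdio.h>

    int main(void) {
        int secret = 42;          /* readable only by an "authorized" program */
        int leak1, leak2;

        leak1 = secret;           /* explicit flow: a direct copy */

        if (secret > 0)           /* implicit flow: the branch taken reveals   */
            leak2 = 1;            /* information about the secret even though  */
        else                      /* no assignment mentions it directly        */
            leak2 = 0;

        printf("%d %d\n", leak1, leak2);   /* both outputs depend on secret */
        return 0;
    }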
Information security: integrity
- Integrity: valuable information should not be damaged by computation
- Simple (access control) version:
  - Only authorized processes can write to a file
  - But… when should a process be "authorized"?
- End-to-end version:
  - Information should not be updated on the basis of less trustworthy information
  - Requires tracking information flow in the system
- Information flow is not a property [McLean94]
  - No information flow from x to y:
    P(T) if every pair of traces in T with the same initial value for x always has the same value for y
  - This is a predicate on pairs of traces rather than on individual traces, so it cannot be expressed as ∀t∈T. P'(t)

Privacy and Anonymity
- Anonymity:
  - individuals (principals) and their actions cannot be linked by an observer
  - alt: the identity of participating principals cannot be determined even if the actions are known
- Privacy: encompasses aspects of confidentiality, secrecy, anonymity
Availability
- The system is responsive to requests
- DoS attacks: attempts to destroy availability (perhaps by cutting off network access)
- Fault tolerance: the system can recover from faults (failures), remaining available and reliable
  - Benign faults: not directed by an adversary
    - The usual province of fault-tolerance work
  - Malicious or Byzantine faults: the adversary can choose the time and nature of the fault
    - Byzantine faults are attempted security violations
    - usually limited by not knowing some secret keys

Enforcing safety properties
Reference Monitor
- Observes the execution of a program and halts the program if it's going to violate the security policy.
- Common examples:
  - memory protection
  - access control checks
  - routers
  - firewalls
- Most current enforcement mechanisms are reference monitors

Requirements for a Monitor
- Must have (reliable) access to information about security-relevant actions of the program
  - e.g., what instruction is it about to execute?
- Must have the ability to "stop" the program
  - can't stop a program running on a different machine
  - … or transition to a "good" state.
- Must protect the monitor's state and code from tampering.
  - a key reason why a kernel's data structures and code aren't accessible by user code
- Low overhead in practice
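A minimal sketch of a reference monitor inlined into a program: each security-relevant action goes through a check that can deny the request or halt the program. The path-prefix policy and function names are illustrative only.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static FILE *monitored_fopen(const char *path, const char *mode) {
        const char *allowed_prefix = "/tmp/sandbox/";
        if (strncmp(path, allowed_prefix, strlen(allowed_prefix)) != 0) {
            fprintf(stderr, "reference monitor: denied open of %s\n", path);
            exit(1);                      /* halt before the bad action */
        }
        return fopen(path, mode);         /* mediated action proceeds */
    }

    int main(void) {
        FILE *ok = monitored_fopen("/tmp/sandbox/notes.txt", "r");  /* allowed */
        if (ok) fclose(ok);
        monitored_fopen("/etc/passwd", "r");                        /* halted  */
        return 0;
    }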
Pervasive mediation
(Diagram: three placements of the execution monitor (EM): beneath the base system as an OS reference monitor, wrapped around the base system and its extensions as an interpreter, and merged into each extension as program instrumentation.)
- OS reference monitor: won't capture all events
- Wrapper/interpreter: performance overhead
- Instrumentation: merge the monitor into the program
  - different security policies ⇒ different merged-in code
  - simulation does not affect the program
  - pay only for what you use

What policies?
- Reference monitors can only see the past
  - They can enforce safety properties but not liveness properties
- Assumptions:
  - the monitor can have access to the entire state of the computation
  - the monitor can have arbitrarily large state
  - the safety properties enforced are modulo the computational power of the monitor
- But: the monitor can't guess the future – the predicate it uses to determine whether to halt a program must be computable.
Software Fault Isolation (SFI)
- Wahbe et al. (SOSP '93)
- Goal is process isolation: keep software components in the same hardware-based address space, but provide memory protection
  - Idea: the application can use untrusted code without memory-protection overhead
- A software-based reference monitor isolates components into logical address spaces
  - conceptually: check each read, write, & jump to make sure it's within the component's logical address space
  - hope: communication as cheap as a procedure call
  - worry: the overheads of checking will swamp the benefits of cheap communication
- Only provides memory isolation; doesn't deal with other security properties: confidentiality, availability, …

One way to SFI: Interpreter

    void interp(int pc, int reg[], int mem[], int code[],
                int memsz, int codesz) {
      while (true) {
        if (pc >= codesz) exit(1);
        int inst = code[pc], rd = RD(inst), rs1 = RS1(inst),
            rs2 = RS2(inst), immed = IMMED(inst);
        switch (opcode(inst)) {
          case ADD: reg[rd] = reg[rs1] + reg[rs2]; break;
          case LD:  { int addr = reg[rs1] + immed;
                      if (addr >= memsz) exit(1);
                      reg[rd] = mem[addr];
                      break; }
          case JMP: pc = reg[rd]; continue;
          ...
        }
        pc++;
      }
    }

    Code being interpreted:
      0: add r1,r2,r3
      1: ld  r4,r3(12)
      2: jmp r4
Interpreter pros and cons
- Pros:
  - easy to implement (small TCB)
  - works with binaries (high-level language-independent)
  - easy to enforce other aspects of OS policy
- Cons:
  - terrible execution overhead (25x? 70x?)
- It's a start.

Partial Evaluation (PE)
- A technique for speeding up interpreters:
  - we know what the code is
  - specialize the interpreter to the code
    - unroll the main interpreter loop – one copy for each instruction
    - specialize the switch to the instruction: pick out that case
  - compile the resulting code
- Can do this at run time with dynamic binary rewriting (e.g., DynamoRIO)
  - Keep a code cache of specialized code
  - Reduce load time and code footprint
Example PE
- Original binary:

    0: add r1,r2,r3
    1: ld  r4,r3(12)
    2: jmp r4

- Interpreter case being specialized:

    case LD: int addr = reg[rs1] + immed;
             if (addr >= memsz) exit(1);
             reg[rd] = mem[addr];
             break;

- Specialized interpreter:

    reg[1] = reg[2] + reg[3];
    addr = reg[3] + 12;
    if (addr >= memsz) exit(1);
    reg[4] = mem[addr];
    pc = reg[4];

- Resulting code:

    0: add  r1,r2,r3
    1: addi r5,r3,12
    2: subi r6,r5,memsz
    3: jab  _exit
    4: ld   r4,r5(0)

Sandboxing
- SFI code rewriting is "sandboxing"
- Requires that the code and data for a security domain are in one contiguous segment
  - the upper bits are all the same and form a segment id
  - a separate code space ensures the code is not modified
- Inserts code to ensure loads and stores stay in the logical address space
  - force the upper bits of the address to be the segment id
  - no branch penalty – just mask the address
  - re-allocate registers and adjust PC-relative offsets in the code
  - a simple analysis is used to eliminate some masks
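A sketch of the core sandboxing transformation described above: rather than a compare-and-branch, each address is masked so that its upper bits always equal the domain's segment id. The constants are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    #define SEGMENT_ID   0x2A000000u   /* upper bits identifying the segment    */
    #define OFFSET_MASK  0x00FFFFFFu   /* lower bits: offset inside the segment */

    static uint32_t sandbox(uint32_t addr) {
        /* Force the address into the segment: no branch, just two ALU ops. */
        return (addr & OFFSET_MASK) | SEGMENT_ID;
    }

    int main(void) {
        uint32_t inside  = 0x2A001234u;   /* already in the segment: unchanged */
        uint32_t outside = 0x7F001234u;   /* escapes the segment: forced back  */
        printf("%08x -> %08x\n", (unsigned)inside,  (unsigned)sandbox(inside));
        printf("%08x -> %08x\n", (unsigned)outside, (unsigned)sandbox(outside));
        return 0;
    }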