Share What We Say

FREAK Attack: 90’s NSA actions boomerang

Leandro Boffi - Fri, 2015-03-06 13:08

Back in the early 90's it was illegal in the US to export strong crypto code. Sending crypto code overseas was considered something similar to exporting weapons. At that time the NSA and the US government banned people from selling software to other countries unless the code involved encryption keys no longer than 512 bits.

The idea was to export weak encryption to the rest of the world to keep the stronger stuff at home.

Although these restrictions were removed a long time ago, some TLS/SSL implementations still support these 90's ciphers.

Last Tuesday security researchers found a vulnerability in some TLS/SSL implementations that allows an attacker to force clients and servers connecting over HTTPS to use these weakened encryption mechanisms, which the attacker can then break to steal or manipulate sensitive data like session cookies or credentials.

Is this critical? Well, let me put it this way: Microsoft just said that all Windows versions are vulnerable. This means that if you're using Windows, an attacker on your network can potentially force Internet Explorer and other software using the Windows Secure Channel component to use weak encryption over the web and capture your credentials, session cookies or any other sensitive information.

How does it work?

If you want to know more about the attack, here is a very detailed analysis of it, but roughly, this is how it works:

1. In the client’s Hello message, it asks for a standard ‘RSA’ ciphersuite.
2. The MITM attacker changes this message to ask for ‘export RSA’.
3. The server responds with a 512-bit export RSA key, signed with its long-term key.
4. The client accepts this weak key due to the OpenSSL/SecureTransport bug.
5. The attacker factors the RSA modulus to recover the corresponding RSA decryption key.
6. When the client encrypts the ‘pre-master secret’ to the server, the attacker can now decrypt it to recover the TLS ‘master secret’.
7. From here on out, the attacker sees plaintext and can inject anything it wants.
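
Step 5 is the heart of the attack, and the only expensive part. As a toy illustration (a hypothetical helper, not the researchers' code), the whole story is that recovering an RSA private key is just factoring the modulus, and the cost of factoring explodes with key size: a textbook modulus falls to trial division, while a real 512-bit export modulus reportedly cost the researchers roughly $100 of cloud compute, and 2048-bit keys remain far out of reach.

```javascript
// Toy factoring by trial division: enough for a textbook-sized modulus,
// hopeless for real key sizes, but the attacker's job in step 5 is
// exactly this, scaled up with serious number-theoretic sieves.
function factor(n) {
  for (var p = 2; p * p <= n; p++) {
    if (n % p === 0) return [p, n / p]; // smallest prime factor found
  }
  return [n, 1]; // n is prime
}

console.log(factor(3233)); // 3233 = 53 * 61, the classic textbook RSA modulus
```

With the two primes in hand, the attacker can rebuild the private exponent and decrypt the pre-master secret captured in step 6.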

Who is vulnerable?

The FREAK (Factoring RSA Export Keys) attack is possible when both client and server are vulnerable.

From the client's perspective, it seems that many major browsers on all major platforms are vulnerable. As I said before, a couple of hours ago Microsoft published this article saying that all versions of Windows are vulnerable, with no patch yet. The SecureTransport library used by the Safari web browser on iPhones, iPads and OS X Macs, and the OpenSSL used by Android browsers, are also vulnerable implementations. Keep in mind that this doesn't only affect browsers, but also native apps using vulnerable TLS libraries.

This is the list of known vulnerable browsers:

Internet Explorer (All versions)
Chrome on Mac OS (Patch available now)
Chrome on Android
Safari on Mac OS (Patch expected next week)
Safari on iOS (Patch expected next week)
Stock Android Browser

You can check if your client is vulnerable here:

From the server's side, any server supporting RSA_EXPORT cipher suites is potentially vulnerable. Almost any popular web server (Nginx, Apache, IIS, etc.) may be vulnerable depending on its configuration, so if you are running a web server you must disable export ciphers as soon as possible.

Node.js v0.10.36, v0.12 and io.js are not vulnerable, since they use a more recent version of OpenSSL in which the vulnerability was fixed. Unfortunately, in older versions (0.10.35 and older, including all versions of 0.8.x) the TLS client uses the default OpenSSL cipher suites, which makes them vulnerable (of course, you can always limit the ciphers when creating the TLS server).

You can test your server using this online tool:

Categories: Blogs

Gmail App for iOS: An example of a terrible security practice in mobile apps

Leandro Boffi - Wed, 2015-01-21 12:45

Today I installed the official Gmail client for iOS for the first time, and I was really surprised that a company like Google has produced such an insecure app. So, before uninstalling the app forever from my iPhone, I decided to write this post to share with you why you should do the same as soon as possible, and to show a very common and dangerous practice in mobile apps.

The dream of phishing scammers

The main security concern about the Gmail client for iOS is that it uses a very dangerous practice that, unfortunately, is very common in today's mobile apps: it opens unknown links in an embedded webview.

Basically, when you receive an email containing a link and you click on it, the app opens the link inside the embedded web view instead of launching a browser.

This is the worst thing you can do, from a security standpoint, in an app whose basic functionality is receiving messages from other people (even strangers). It is a phishing scammer's dream, because with an embedded web view you don't have any UI element protecting you from phishing: there is no TLS padlock icon and no address bar like you would have in a browser.

So while you are looking at what you think is your home banking login screen, you could be on a completely different site, and you don't have any clue about it.

How can it get worse? Spoofing…

Please don’t do this at home, but it gets worse if you add a bit of email spoofing, especially against Apple devices, because, for some reason I can’t explain, spoofing an Apple email address is a very easy thing to do: their DNS configuration sets the SPF record to ~all instead of -all:


That means SoftFail instead of Fail. For this reason, if an attacker spoofs an address, the victim will not see any error in the Gmail client and will think it is a valid Apple email.
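
The difference between the two qualifiers can be sketched with a small helper (hypothetical, just to make the semantics concrete):

```javascript
// Classify the terminal "all" mechanism of an SPF TXT record.
// ~all (SoftFail): receivers accept spoofed mail and at most flag it.
// -all (Fail): receivers are entitled to reject spoofed mail outright.
function spfAllMechanism(record) {
  if (/~all\s*$/.test(record)) return 'SoftFail';
  if (/-all\s*$/.test(record)) return 'Fail';
  return 'other';
}

console.log(spfAllMechanism('v=spf1 ~all')); // → SoftFail
console.log(spfAllMechanism('v=spf1 -all')); // → Fail
```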

So, to collect a victim's iCloud credentials you just need to go to some online email spoofer and send an email like:


And that’s all; this is how the victim’s phone will look:


Notice that when reading the email there is no clue that it is fake: the spoofed sender address is displayed without any warning. At the same time, when we click the link, thanks to the webview there is no clue about where we really are; the app only shows the window.title content, which, as you know, can be set to anything.

The right approach

Basically, this kind of attack would be impossible if the Gmail application launched the link in a browser instead of using the embedded webview, because the UI elements (padlock and address bar) would tell you where you are and whether you are using TLS/SSL against the right host.

So, what do you think now? Will you uninstall the Gmail app?

Categories: Blogs

Reloading node with no downtime

Jose Romaniello Blog - Sun, 2015-01-18 20:22

I wrote a blog post about Unix signals and graceful shutdown in Node.js applications five months ago. In this article I will explain how to reload a Node.js application with no downtime.

One of the things that I like about nginx is how it handles configuration changes (see Controlling nginx): the master process "reloads the configuration" by creating new worker processes when it receives the SIGHUP signal.

Node.js comes with a cluster module that allows us to do very powerful things.

For this example I will use one worker but it can be extended to use as many workers as you want.


var cluster = require('cluster');

console.log('started master with ' +;

// fork the first process
cluster.fork();

process.on('SIGHUP', function () {
  console.log('Reloading...');
  var new_worker = cluster.fork();
  new_worker.once('listening', function () {
    // stop all other workers
    for (var id in cluster.workers) {
      if (id == continue; // don't kill the new worker
      cluster.workers[id].kill('SIGTERM');
    }
  });
});

The master process starts the first worker and then listens for the SIGHUP signal. When it receives SIGHUP, it forks a new worker and waits until the new worker is listening on the IPC channel; once the new worker is listening, it kills the other workers.

This works out of the box because the cluster module allows several worker process to listen on the same address.


var cluster = require('cluster');

if (cluster.isMaster) {
  require('./master');
  return;
}

var express = require('express');
var http = require('http');

var app = express();

app.get('/', function (req, res) {
  res.send('ha fsdgfds gfds gfd!');
});

http.createServer(app).listen(8080, function () {
  console.log('http://localhost:8080');
});

This is the entry point for the application; it is a simple express application with the exception of the first part.

You can test this as follows:

I've uploaded a more complete example to github.

Categories: Blogs

WS-Trust Proof-of-Possession (PoP) tokens with client and server entropy (with partial keys) – Part 1

Leandro Boffi - Thu, 2015-01-15 12:28

As a security feature, WS-Trust supports Proof-of-Possession tokens. In this post I want to show you how to consume a service that requires PoP token security with client and server entropy (I'll go deeper on what that means in a minute). This method has been tested with Microsoft Dynamics CRM and ADFS.

This is a very long topic, so I will split it into two posts. This first one shows how to request the token and calculate the PoP key (shared key); in the second one I will show how to sign the request using this key.

But let’s start with a little context…

Proof-of-Possession tokens (PoP) vs Bearer tokens

To start understanding a PoP token we can compare it with its more common alternative: the Bearer token.

Bearer tokens are like cash: the mere fact of having one allows you to use it without any other step or verification. PoP tokens, on the other hand, are like a credit card: merely possessing it is not enough; to use it you must prove that you are its real owner by presenting, for example, a valid ID.

When we use Bearer tokens, the security of our applications relies on some transport encryption mechanism like TLS/SSL. That is OK in some cases, but some scenarios require stronger security mechanisms. PoP tokens add a security layer, because if someone captures one of your tokens they will still need to prove that they are its real owner in order to use it against your application.

What is the problem with TLS/SSL?

Well, as I said, relying on TLS/SSL is OK in some cases, but in others you want stronger security features. The first problem of relying only on TLS/SSL is that you are putting all your eggs in one basket, and in any security 101 class you will learn that a core principle is layering your security.

On the other hand, TLS/SSL security depends on the client application validating the certificate; otherwise you are completely exposed to man-in-the-middle attacks. That means you are moving the entire security of your service to the client, and you know what? Developers and TLS/SSL don’t like each other.

If you google “How to handle SSL certificate error”, the first thing you will find is a lot of ways to bypass this validation on every platform. For example, Node.js only changed its default from not validating to validating the certificate in version 0.10.x.

Now, before going into how WS-Trust implements this feature, let me share the good news: there is work in progress to bring PoP tokens to OAuth. You can read more about that here: OAuth Proof-of-Possession Drafts.

WS-Trust and PoP Tokens

To start looking at how we can use PoP tokens in WS-Trust, let’s look at the spec’s definition of PoP:

A proof-of-possession (POP) token is a security token that contains secret data that can be used to demonstrate authorized use of an associated security token; thereby the final service (relying party) can validate that the caller is the “real” owner of the token being presented. Typically, although not exclusively, the POP token consists of a key known to the relying party.

Now, you need to know that WS-Trust specifies two ways of using PoP token keys: specific and partial.

When you use specific keys, the client can specify the key when requesting the token, or the security token service (ADFS, for example) can provide the key to be used. In both cases you just use that specific key to sign the requests you send to the relying party (the final service).

The partial keys scenario is a bit different, because you will use two keys, one from the server and one from the client, and the final key (the one with which you will sign the requests you send to the relying party) must be calculated by combining both.

In this post I will explain the partial scenario, which is the more complex but, at the same time, the more secure one.

Let’s start!

The first thing you need to know is which type of keys the service you want to call requires. You will find it described with WS-Policy in the WSDL file of the service:

<sp:Trust13 xmlns:sp="">
  <wsp:Policy>
    <sp:MustSupportIssuedTokens/>
    <sp:RequireClientEntropy/>
    <sp:RequireServerEntropy/>
  </wsp:Policy>
</sp:Trust13>

In this example the service requires client and server entropy; that means partial keys.

Request the token

Now we need to request a token. I’ve explained in this post how to request a token from any platform and language, but it is basically a SOAP call to the WS-Trust endpoint of our STS (for example ADFS). In the body of our SOAP envelope we specify the Request Security Token (a predefined WS-Trust format for requesting tokens), including the key we want to use:

<t:RequestSecurityToken xmlns:t="">
  <t:RequestType></t:RequestType>
  <wsp:AppliesTo xmlns:wsp="">
    <EndpointReference xmlns="">
      <Address></Address>
    </EndpointReference>
  </wsp:AppliesTo>
  <t:Entropy>
    <t:BinarySecret Type="">
      yEEN5hsRamzDqFKmNqvp+3d2yzGOU+czcEeEXVJJ4fA=
    </t:BinarySecret>
  </t:Entropy>
  <t:KeySize>256</t:KeySize>
  <t:KeyType></t:KeyType>
  <t:ComputedKeyAlgorithm></t:ComputedKeyAlgorithm>
</t:RequestSecurityToken>

This key, yEEN5hsRamzDqFKmNqvp+3d2yzGOU+czcEeEXVJJ4fA=, specified using the Entropy tag, will be our “client entropy”.

As you can see, we are also setting the KeySize to 256, the KeyType to symmetric key, and the ComputedKeyAlgorithm to PSHA1; this is the algorithm that we are going to use to combine both keys. While it can be extended, the default algorithm is PSHA1, specified in the TLS spec.

Keep in mind that all of these parameters are configurable and depend on the relying party’s configuration, so you should read all of them from the WSDL file of the service you want to call, inside the RequestSecurityTokenTemplate element.

Now, after you send the request for the token, the server will reply with something like this:

<t:RequestSecurityTokenResponse xmlns:t="">
  <t:Entropy>
    <t:BinarySecret>TUv/+WgHQYY2nR3kqB/5/Zac117tkBf2CkxWvs4G2pA=</t:BinarySecret>
  </t:Entropy>
  <t:RequestedSecurityToken>
    <xenc:EncryptedData>...</xenc:EncryptedData>
  </t:RequestedSecurityToken>
</t:RequestSecurityTokenResponse>

I removed a lot of markup to improve presentation, but inside the response you will find two important elements. The first one is the Entropy; as you can imagine, this is the “server entropy”. The second one is the RequestedSecurityToken containing the SAML assertion. In this scenario, the SAML assertion must be encrypted to ensure that only the relying party (the service you want to call) can decrypt it, because it contains the key with which you need to sign the requests to prove that you are the real owner of the assertion.

Remember that the whole idea of this scenario is that, if your token is somehow leaked, possessing it shouldn’t be enough to authenticate against the relying party (the service), so it doesn’t make sense to send the key with which you prove ownership inside the assertion if the assertion is not encrypted.

Calculate the shared secret (PoP key)

Now we have the three elements that we need to call the service: the SAML assertion (encrypted), the client entropy and the server entropy.

The first thing we need to do is calculate the final key with which we are going to sign the requests. To do that, we are going to use the PSHA1 algorithm to combine both keys (client and server). As I said before, this algorithm is specified in the TLS spec.

I’ve published a Node.js module that implements the PSHA1 algorithm; you can find it here: But to give you an idea, this is the Node.js implementation:

var crypto = require('crypto');

module.exports = function (secret, seed, keySize) {
  keySize = keySize || 256;

  var clientBytes = new Buffer(secret, 'base64');
  var serverBytes = new Buffer(seed, 'base64');

  var sizeBytes = keySize / 8;
  var sha1DigestSizeBytes = 160 / 8; // 160 is the length of a sha1 digest

  var buffer1 = serverBytes;
  var buffer2 = new Buffer(sha1DigestSizeBytes + serverBytes.length);
  var pshaBuffer = new Buffer(sizeBytes);

  var i = 0;
  var temp = null;

  while (i < sizeBytes) {
    buffer1 = new Buffer(crypto.createHmac('sha1', clientBytes)
      .update(buffer1).digest(), 'binary');
    buffer1.copy(buffer2);
    serverBytes.copy(buffer2, sha1DigestSizeBytes);

    temp = new Buffer(crypto.createHmac('sha1', clientBytes)
      .update(buffer2).digest(), 'binary');

    for (var x = 0; x < temp.length; x++) {
      if (i < sizeBytes) {
        pshaBuffer[i] = temp[x];
        i++;
      } else {
        break;
      }
    }
  }

  return pshaBuffer.toString('base64');
};

In this example, secret is the client key and seed is the server key. After applying this algorithm you will end up with the shared secret key (the final key that we are going to use to sign the request):

var sharedSecret = psha1(clientEntropy, serverEntropy, 256);

In the next post I will show how to sign the request to the service using this key… see you soon.

Categories: Blogs

Security Stack for Modern Apps talk at UTN: The video (Spanish)

Leandro Boffi - Mon, 2015-01-05 03:00

Last December 19 I was invited by the Argentine National Technological University (UTN) in Buenos Aires to speak about security architectures in modern apps.

In my talk I covered token-based authentication scenarios for Single Page and Mobile Apps, access delegation with OAuth 2.0 and identity federation with OpenID Connect. It was really fun and such an honour, so I am very grateful to have been invited.

I want to share with you the videos of the talk, hoping you remember your Spanish lessons!

Categories: Blogs

Google’s XSS Problem: It happens in the best of families

Leandro Boffi - Sun, 2014-12-28 22:34

A couple of days ago, this guy found an unbelievable XSS vulnerability on Google’s results page.

Basically, when you add your site to Google’s index you can add some links that are shown as breadcrumbs in the results page and that the user can click. In his post he shows how Google was not validating the input for those links, allowing you to write something like javascript:alert('hello!'), which executes on the page’s origin when the user clicks the link.

Of course, Google fixed this issue pretty fast, but the funny thing is that it had been there since the breadcrumbs functionality became available.

The lesson we should learn is that this kind of XSS is everywhere on the web, from the smallest to the biggest company, and that regular developers are not enough: every company should have security experts reviewing the code all the time.

So the question is… Are you checking your code for XSS vulnerabilities enough?

Categories: Blogs

Speaking at UTN: Security Stack for Modern Applications

Leandro Boffi - Mon, 2014-12-01 16:35

Next December 19 I will be closing the year speaking about Security Architectures for modern applications at Argentine National Technological University in Buenos Aires.


The National Technological University (Spanish: Universidad Tecnológica Nacional, UTN) is a country-wide national university in Argentina and is considered among the top engineering schools in the country, so it is a great honour to be invited to speak there.

In my talk I will be covering token-based authentication scenarios for Single Page and Mobile Apps, access delegation with OAuth 2.0 and identity federation with OpenID Connect, as well as having some fun showing how to hack badly implemented sites.

As part of the conference, named Summer.js, other very interesting talks will take place on topics like Angular.js, React, Phonegap, Cordova and the Ionic Framework.

If you are in Buenos Aires at that time I hope to see you there! Sign up here!

Categories: Blogs

do not version urls

Pablo Blog - Mon, 2014-10-20 18:21

Versioning the Web API URL is probably one of the most common choices among developers. Well-known APIs such as Twitter, Github or Facebook use this approach, but that does not mean it’s the best way to do things. It presents some of the issues discussed below.

  • A new version number represents a new set of resources. If you have to create a new version to introduce a breaking change in one resource, that change expands to all the resources.

For example, you have two resources, /orders and /customers. You need to introduce a new version to accommodate a schema change in orders. That implies adding a new version number in the URL for v1/orders and v1/customers. Although customers is still the same resource, it’s now referenced as a new resource, v1/customers.

  • It’s hard to introduce backward-compatible changes. You might want to introduce improvements or changes that new clients can use without affecting existing ones. You could create a new version number for this, but it would represent unnecessary overhead: existing clients won’t be affected by the change, so creating a new version does not seem right. Also, you will not want to keep the same version number, as you will want clients to know which specific version they are targeting.

  • It does not go along with the idea of introducing incremental changes. A new version number usually represents a major release. If you want to make changes public as they become available, you need a new version number. However, you won’t want to create v1, v1.1, v1.2 because of the overhead discussed in the previous point.

A better approach for versioning.

Use an http header to specify version. If no http header is specified in the request message, stick to the latest version.

/orders
accepts-version: 1.0
content-type: application/json

The “accepts-version” header represents the version the client can understand. If changes introduced in the resource representation won’t affect the client, the service can still return it. Let’s say that you now have a new version 1.3 of /orders which only contains backward-compatible changes. The server can return a header to inform the client of that.

/orders
version: 1.3

The client will know a new version exists which is also compatible with 1.0, so it can optionally upgrade to it. This approach also works fine for dynamic languages or schema-less formats like JSON.

For embedded URLs or browser support, the http header can be replaced by an optional query string parameter, ?accepts-version, or ?v to make it shorter.
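
Putting the pieces together, the negotiation rule is tiny (a hypothetical helper, not from the post): honor the client’s pinned version from the accepts-version header or the ?v query parameter, and fall back to the latest otherwise.

```javascript
var LATEST = '1.3';

// headers: parsed request headers; query: parsed query-string object.
function negotiateVersion(headers, query) {
  return headers['accepts-version'] || (query && query.v) || LATEST;
}

// In the handler, also advertise what the server actually served:
//   res.setHeader('version', LATEST);

console.log(negotiateVersion({}, {}));                           // → 1.3
console.log(negotiateVersion({ 'accepts-version': '1.0' }, {})); // → 1.0
```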

Categories: Blogs

Don't Inject Markup in A Web Page using Document.Write

Professional ASP.NET Blog - Tue, 2013-06-04 15:33
Look around just about every consumer facing site you visit these days has a third party script reference. Just about everyone uses Google Analytics and if you are like a former client of mine you have it and 2 other traffic analysis service scripts injected...(read more)
Categories: Blogs

Sending a Photo via SMS on Windows Phone

Professional ASP.NET Blog - Thu, 2013-05-30 03:01
Smartphones are awesome. They are the modern Swiss Army Knife because they do so much. One of the most important features in my opinion is taking photos. My Nokia Lumia has one of the best cameras available in a Smartphone and I like to use it all the...(read more)
Categories: Blogs

You Don't Need Windows To Test Your Web Site in Internet Explorer

Professional ASP.NET Blog - Wed, 2013-05-29 17:25
I know the majority of developers reading my Blogs are typically ASP.NET, enterprise developers. This means they develop on a Windows machine using Visual Studio most of the time. However in the broad market most modern web developers work on a MAC or...(read more)
Categories: Blogs

Using The New Git Support in WebMatrix 3

Professional ASP.NET Blog - Sun, 2013-05-26 15:19
WebMatrix is probably my favorite web development IDE because it is so simple and easy to use. Sure I use Visual Studio 2012 everyday and it has probably the best web development features available on the market. I also really dig Sublime. WebMatrix is...(read more)
Categories: Blogs

Publish to Directly To Azure Web Sites With WebMatrix

Professional ASP.NET Blog - Wed, 2013-05-01 20:39
WebMatrix is one of my favorite development tools because it really allows me to focus on what I love to do most, build modern web clients. It is a free Web IDE available from Microsoft and today they released version 3 for general availability . There...(read more)
Categories: Blogs

17000 Tweets in 365 Days - Not Too Many To Be Annoying

Professional ASP.NET Blog - Tue, 2013-04-30 14:29
What the heck was I thinking? Why did I do it? What did I learn? How did I do it? These are all things I have asked myself and others have asked me over the past year. It sounds like an odd labor to undertake and such an odd number. But yes I did 17,000...(read more)
Categories: Blogs

Introducing ToolbarJS - A HTML5 JavaScript Library to Implement the Windows Phone AppBar Functionality

Professional ASP.NET Blog - Sun, 2013-04-28 12:03
Back in February I released deeptissuejs , a HTML5, JavaScript touch gesture library. In January I release panoramajs a HTML5, JavaScript library to implement the basic Windows Phone panorama control experience. This month I am excited to release another...(read more)
Categories: Blogs

HTML5 and CSS3 Zebra Striping - Look Ma No JavaScript

Professional ASP.NET Blog - Mon, 2013-04-22 11:36
It was 5 maybe 6 years ago when I first started learning jQuery. One of the first things I did was order the jQuery In Action book . If you have read that book you should remember one of the first examples given, zebra striping a table. To me this example...(read more)
Categories: Blogs

Listen to Me Talk to Carl & Richard about the Surface Pro, Mobile Development and More

Professional ASP.NET Blog - Thu, 2013-04-18 11:53
A few weeks ago I got to sit down and chat with the DotNetRocks guys about a variety of topics. The initial premise for the interview was to talk about the Surface and why I love it so much. I think we got into some great tangents right from the start!...(read more)
Categories: Blogs

Why It's Time to Sunset jQuery

Professional ASP.NET Blog - Sun, 2013-04-14 14:15
I owe so much to John Resig and the jQuery team for creating such a wonderful framework. I have staked most of my recent career on jQuery the way I staked my career on ASP.NET back in 2001. I have built many applications using jQuery over the past five...(read more)
Categories: Blogs

The Good and Bad For - Helping it Scale With Web Performance Optimization

Professional ASP.NET Blog - Fri, 2013-04-12 13:30
BitCoin seems to be latest rage with wild value fluctuations. The past few days have seen a very wild roller coaster for the online currency. Most of the world's BitCoins are exchanged at , which has had some issues either with a denial of service...(read more)
Categories: Blogs

HTML5 Is Ready For the Big Time, Are You?

Professional ASP.NET Blog - Sun, 2013-04-07 02:11
Much has been said and 'debated' in recent years about the viability of HTML5. It should be obvious where I stand if you read my Blog or talk to me in person. HTML5, CSS3 and JavaScript are certainly ready and have been for a while. The big problem, as...(read more)
Categories: Blogs