Back in the early ’90s it was illegal in the US to export strong cryptographic code; sending crypto code overseas was treated much like exporting weapons. At that time, the NSA and the US government banned selling software to other countries unless the encryption keys involved were no longer than 512 bits.
The idea was to export weak encryption to the rest of the world to keep the stronger stuff at home.
Although these restrictions were removed a long time ago, some TLS/SSL implementations still support these ’90s ciphers.
Last Tuesday, security researchers found a vulnerability in some TLS/SSL implementations that allows an attacker to force clients and servers connecting over HTTPS to use these weakened encryption mechanisms, which the attacker can then break to steal or manipulate sensitive data like session cookies or credentials.
Is this critical? Well, let me put it this way: Microsoft just said that all Windows versions are vulnerable. This means that if you're using Windows, an attacker on your network can potentially force Internet Explorer, and any other software using the Windows Secure Channel component, to use weak encryption over the web and capture your credentials, session cookies or any other sensitive information.
If you want to know more about the attack, here is a very detailed analysis: http://blog.cryptographyengineering.com/2015/03/attack-of-week-freak-or-factoring-nsa.html. Roughly, this is how it works:
1. In the client’s Hello message, it asks for a standard ‘RSA’ ciphersuite.
2. The MITM attacker changes this message to ask for ‘export RSA’.
3. The server responds with a 512-bit export RSA key, signed with its long-term key.
4. The client accepts this weak key due to the OpenSSL/SecureTransport bug.
5. The attacker factors the RSA modulus to recover the corresponding RSA decryption key.
6. When the client encrypts the ‘pre-master secret’ to the server, the attacker can now decrypt it to recover the TLS ‘master secret’.
7. From here on out, the attacker sees plaintext and can inject anything it wants.
The FREAK (Factoring RSA Export Keys) attack is possible only when both client and server are vulnerable.
From the client's perspective, it seems that many major browsers on all major platforms are vulnerable. As I said, a couple of hours ago Microsoft published an article stating that all versions of Windows are vulnerable, with no patch yet. The SecureTransport library used by the Safari web browser on iPhones, iPads and OS X Macs, and the OpenSSL used by Android browsers, are also vulnerable implementations. Keep in mind that this doesn't affect only browsers: native apps using vulnerable TLS libraries are affected too.
This is the list of known vulnerable browsers:
Internet Explorer (All versions)
Chrome on Mac OS (Patch available now)
Chrome on Android
Safari on Mac OS (Patch expected next week)
Safari on iOS (Patch expected next week)
Stock Android Browser
You can check if your client is vulnerable here: https://freakattack.com/clienttest.html
On the server side, any server supporting RSA_EXPORT cipher suites is potentially vulnerable. Almost any popular web server (Nginx, Apache, IIS, etc.) may be vulnerable depending on its configuration, so if you are running a web server you must disable export ciphers as soon as possible.
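As a hedged sketch (the cipher string is illustrative, not canonical; check it against your OpenSSL version with `openssl ciphers`), on Nginx you would restrict the cipher list like this, and Apache's equivalent directive is SSLCipherSuite:

```nginx
# Exclude export-grade, anonymous and null cipher suites,
# and prefer the server's cipher ordering.
ssl_ciphers 'HIGH:!EXPORT:!aNULL:!eNULL:!RC4';
ssl_prefer_server_ciphers on;
```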
Node.js v0.10.36, v0.12 and io.js are not vulnerable, since they use a more recent version of OpenSSL in which the vulnerability was fixed. Unfortunately, older versions (0.10.35 and older, including all versions of 0.8.x) use the default OpenSSL cipher suites in the TLS client, which makes them vulnerable. Of course, you can always limit the ciphers yourself when creating a TLS server.
You can test your server using this online tool: https://tools.keycdn.com/freak
Today I installed the official Gmail client for iOS for the first time, and I was really surprised by what a company like Google shipped. After removing it from my iPhone, I decided to write this post to share why you should do the same as soon as possible, and also to show a very common, dangerous practice in mobile apps.
The main security concern with the Gmail client for iOS is that it follows a very dangerous security practice that, unfortunately, is very common in today's mobile apps: it opens unknown links in an embedded webview.
Basically, when you receive an email containing a link and you tap on it, the link opens inside the app, in the embedded web view, instead of launching a browser.
From a security standpoint, this is the worst thing you can do in an app whose basic functionality is receiving messages from other people (even strangers), and it seems like a phishing scammer's dream: in an embedded web view there are no UI elements protecting you from phishing, no TLS padlock icon and no address bar like you would have in a browser. So while you are looking at what you think is your home banking login screen, you could actually be on http://hacker.com without any clue.
Please don’t do this at home, but it get worse if you add a bit of email spoofing to this thing, specially on
Apple devices, because, for some reason that I can’t explain, spoofing an
Apple email address is a very easy thing to do, due to they have on their DNS configuration the SPF record set to
~all instead of
SoftFail instead of
Fail. For this reason if an attacker spoofs any
@apple.com address the victim will not see any error in the gmail client, so he will think that it is a valid
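To illustrate the difference (these records are hypothetical, not Apple's actual DNS data), the qualifier before `all` is what tells receivers how to treat mail from unlisted senders:

```dns
; Hypothetical SPF records, for illustration only
; SoftFail (~all): receivers accept the message but may mark it
example.com.  IN TXT "v=spf1 include:_spf.example.com ~all"
; Fail (-all): receivers are told to reject mail from unlisted senders
example.com.  IN TXT "v=spf1 include:_spf.example.com -all"
```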
So, to collect a victim's iCloud credentials, you just need to go to some online email spoofer and send an email like this:
And that's all; this is how it will look on the victim's phone:
Notice that when reading the email there is no clue that it is fake: the sender shows as email@example.com and there is no warning. At the same time, when we click the link, thanks to the webview there is no clue that we are not on https://apple.com; all we see is the window.title content, which, as you know, can be set to anything.
Basically, this kind of attack would be impossible if the Gmail application launched the link in a browser instead of the embedded webview, because the UI elements (padlock and address bar) would tell you where you are and whether you are speaking TLS/SSL to the right host.
So, what do you think now? Will you uninstall the Gmail app?
As a security feature, WS-Trust supports Proof-of-Possession (PoP) tokens. In this post I want to show you how to consume a service that requires PoP token security with client and server entropy (we'll go deeper in a minute). This method has been tested with Microsoft Dynamics CRM and ADFS.
This is a very long topic, so I will split it into two posts. This first one shows how to request the token and calculate the PoP key (the shared key); in the second I will show how to sign the request using that key.
But let's start with a little context…
To start understanding a PoP token, we can compare it with its more common alternative: the Bearer token. Bearer tokens are like cash: the mere fact of having one allows you to use it without any further verification. PoP tokens, on the other hand, are like a credit card: merely possessing one is not enough; to use it you must prove you are its real owner, for example by presenting a valid ID.
When we use Bearer tokens, the security of our applications relies on some transport encryption mechanism like TLS/SSL. That is fine in some cases, but some scenarios require stronger security mechanisms. PoP tokens add a security layer: if someone captures one of your tokens, they still need to prove they are its real owner in order to use it against your application.
Well, as I said, relying on TLS/SSL is fine in some cases, but in others you want stronger security guarantees. The first problem with relying only on TLS/SSL is that you are putting all your eggs in one basket, and any security 101 class will teach you that layering your defenses is a core principle. Furthermore, TLS/SSL security depends on the client application validating the certificate; otherwise you are completely exposed to man-in-the-middle attacks. That means you are moving the entire security of your service to the client, and you know what? Developers and TLS/SSL don't like each other.
If you google "How to handle SSL certificate error", the first thing you will find is a lot of ways to bypass this validation on every platform. Node.js, for example, only changed its default from not validating to validating certificates relatively recently.
Now, before going into how WS-Trust implements this feature, let me share some good news: there is work in progress to bring PoP tokens to OAuth. You can read more about that here: OAuth Proof-of-Possession Drafts.
To start looking at how we can use PoP tokens in WS-Trust, let's see how the spec defines them: a proof-of-possession (POP) token is a security token that contains secret data that can be used to demonstrate authorized use of an associated security token. In other words, the final service (the relying party) can validate that the caller is the "real" owner of the token being presented. Typically, although not exclusively, the POP token consists of a key known to the relying party.
Now, you need to know that WS-Trust specifies two ways of using PoP token keys: specific and partial.
When you use specific keys, the client can specify the key when requesting the token, or the security token service (ADFS, for example) can provide the key to be used. In either case, you simply use that specific key to sign the requests you send to the relying party (the final service).
The partial-keys scenario is a bit different, because there are two keys, one from the server and one from the client, and the final key, the one with which you will sign your requests to the relying party, must be calculated by combining both. In this post I will explain the partial scenario, which is the more complex but also the more secure one.
The first thing you need to know is which type of keys the service you want to call requires. You will find that described with WS-Policy in the service's WSDL file:
In this example the service requires client and server entropy, which means partial keys.
Now we need to request a token. I explained how to request a token from any platform and language in this post, but it is basically a SOAP call to the WS-Trust endpoint of our STS (for example, ADFS). In the body of our SOAP envelope we need to specify a Request Security Token (a predefined WS-Trust format for requesting tokens), including the key we want to use: yEEN5hsRamzDqFKmNqvp+3d2yzGOU+czcEeEXVJJ4fA=, specified in the Entropy tag, which will be our "client entropy".
As you can see, we are also setting the KeySize to 256, the KeyType to symmetric key, and the ComputedKeyAlgorithm to PSHA1, the algorithm we are going to use to combine both keys. While this can be extended, the default algorithm is the PSHA1 function specified in the TLS spec.
Keep in mind that all of these parameters are configurable and depend on the relying party's configuration, so you should read them from the WSDL file of the service you want to call.
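As an illustration only (the element names come from the WS-Trust 2005/02 namespace, but the exact envelope depends on your STS; the values are the ones discussed above), the relevant part of the request body looks roughly like this:

```xml
<t:RequestSecurityToken xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
  <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>
  <t:KeyType>http://schemas.xmlsoap.org/ws/2005/02/trust/SymmetricKey</t:KeyType>
  <t:KeySize>256</t:KeySize>
  <t:Entropy>
    <t:BinarySecret>yEEN5hsRamzDqFKmNqvp+3d2yzGOU+czcEeEXVJJ4fA=</t:BinarySecret>
  </t:Entropy>
  <t:ComputedKeyAlgorithm>http://schemas.xmlsoap.org/ws/2005/02/trust/CK/PSHA1</t:ComputedKeyAlgorithm>
</t:RequestSecurityToken>
```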
Now, after you send the request for the token, the server will reply with something like this:

```xml
<t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
  <t:Entropy>
    <t:BinarySecret>TUv/+WgHQYY2nR3kqB/5/Zac117tkBf2CkxWvs4G2pA=</t:BinarySecret>
  </t:Entropy>
  <t:RequestedSecurityToken>
    <xenc:EncryptedData>...</xenc:EncryptedData>
  </t:RequestedSecurityToken>
</t:RequestSecurityTokenResponse>
```
I removed a lot of markup to improve readability, but inside the response you will find two important elements. The first is the Entropy which, as you can imagine, is the "server entropy"; the second is the RequestedSecurityToken containing the SAML assertion. In this scenario, the SAML assertion must be encrypted to ensure that only the relying party (the service you want to call) can decrypt it, because it contains the key with which you must sign your requests to prove you are the real owner of the assertion.
Remember, the whole idea of this scenario is that if your token somehow leaks, possessing it should not be enough to authenticate against the relying party (the service); so it makes no sense to send the key that proves ownership inside the assertion unless the assertion is encrypted.
Now we have the three elements we need to call the service: the SAML assertion (encrypted), and the client and server entropies.
The first thing we need to do is calculate the final key with which we will sign the requests. To do that, we use the PSHA1 algorithm to combine both keys (client and server); as I said before, this algorithm is specified in the TLS spec.
I've published a Node.js module that implements the PSHA1 algorithm; you can find it here: https://www.npmjs.org/package/psha1.
In this algorithm, secret is the client key and seed is the server key. After applying it you will end up with the shared secret key, the final key we are going to use to sign the request.
In the next post I will show how to sign the request to the service using this key… see you soon.
Last December 19 I was invited by the Argentine National Technological University (UTN) in Buenos Aires to speak about security architectures in modern apps.
In my talk I covered token-based authentication scenarios for single-page and mobile apps, access delegation with OAuth 2.0, and identity federation with OpenID Connect. It was really fun and such an honour, and I am very grateful to have been invited.
I want to share the videos of the talk with you, hoping you remember your Spanish lessons!
A couple of days ago, this guy found an unbelievable XSS vulnerability on Google's results page. Basically, when you add your site to Google's index you can add links that are shown as clickable breadcrumbs on the results page. In this post he shows how Google was not validating the input for those links, allowing injected script to run in the google.com origin when the user clicks the link. Of course Google fixed this issue pretty fast, but the funny thing is that it had been there ever since the breadcrumbs functionality became available.
The lesson we should learn is that this kind of XSS is everywhere on the web, from the smallest to the biggest company, and regular developers are not enough: every company should have security experts reviewing its code all the time. So the question is… are you checking your code for XSS vulnerabilities enough?
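Escaping alone is not the whole story (attribute, URL and script contexts each need their own encoding, and a vetted templating engine is usually the better choice), but as a minimal sketch of HTML output encoding for the text context (the function name and coverage are my own, not from the post):

```javascript
// Minimal HTML entity encoding for untrusted text interpolated into
// HTML content. Covers the five characters that matter in the HTML
// text and quoted-attribute contexts.
function escapeHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<script>alert(1)</script>'));
// → &lt;script&gt;alert(1)&lt;/script&gt;
```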
Next December 19 I will close the year speaking about security architectures for modern applications at the Argentine National Technological University in Buenos Aires. The National Technological University (Spanish: Universidad Tecnológica Nacional, UTN) is a country-wide national university in Argentina and is considered among the top engineering schools in the country, so it is a great honour to be invited to speak there.
In my talk I will be covering token-based authentication scenarios for single-page and mobile apps, access delegation with OAuth 2.0, and identity federation with OpenID Connect, as well as having some fun teaching how to hack badly implemented sites.
If you are in Buenos Aires at that time, I hope to see you there! Sign up here!