Share What We Say

Do Not Version URLs

Pablo Blog - Mon, 2014-10-20 18:21

Versioning the Web API URL is probably the most common choice among developers. Well-known APIs such as Twitter, GitHub, or Facebook use this approach, but that does not mean it's the best way to do things. It presents some of the issues discussed below.

  • A new version number represents a new set of resources. If you have to create a new version to introduce a breaking change in one resource, that change extends to all the resources.

For example, you have two resources, /orders and /customers. You need to introduce a new version to accommodate a schema change in orders. That implies adding a new version number in the URL for both: v1/orders and v1/customers. Although customers is still the same resource, it is now referenced as a new resource, v1/customers.

  • It's hard to introduce backward-compatible changes. You might want to introduce improvements or changes that new clients can use without affecting existing ones. You could create a new version number for this, but it would add unnecessary overhead: existing clients won't be affected by the change, so creating a new version does not seem right. Keeping the same version number is not ideal either, as you will want clients to know which specific version they are targeting.

  • It does not fit the idea of introducing incremental changes. A new version number usually represents a major release. If you want to make changes public as they become available, you need a new version number each time. However, you won't want to create v1, v1.1, v1.2, and so on, for the overhead discussed in the previous point.

A better approach to versioning

Use an HTTP header to specify the version. If no header is specified in the request message, stick to the latest version.

/orders
accepts-version: 1.0
content-type: application/json

The "accepts-version" header represents the version the client can understand. If changes were introduced in the resource representation that won't affect the client, the service can still return it. Let's say you now have a new version 1.3 of /orders, which contains only backward-compatible changes. The server can return a header to inform the client of that.

/orders
version: 1.3

The client will know a new version exists that is also compatible with 1.0, so it can optionally upgrade to it. This approach also works fine for dynamic languages or schema-less formats like JSON.

For embedded URLs or browser support, the HTTP header can be replaced by an optional query string parameter, ?accepts-version, or ?v to make it shorter.
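The negotiation described above can be sketched as a small helper. This is a minimal sketch assuming an Express-style request object; the name resolveVersion and the "1.3" latest version are illustrative, not from the post:

```javascript
// Sketch: pick the API version a client asked for, falling back to the
// latest version when neither the header nor the query parameter is present.
// Assumes an Express-style request shape: { headers: {...}, query: {...} }.
function resolveVersion(req, latestVersion) {
  return (
    (req.headers && req.headers['accepts-version']) || // accepts-version header
    (req.query && req.query.v) ||                      // ?v=... fallback for browsers
    latestVersion                                      // default: latest version
  );
}
```

A handler could then dispatch on the resolved version and include a `version` response header, so clients learn when a newer compatible revision exists.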

Categories: Blogs

PSHA1 Algorithm for WS-Trust Server and Client Entropy Scenarios on Node.js

Leandro Boffi - Wed, 2014-07-23 13:34

I've just published a new Node.js module that implements the P_SHA1 algorithm as specified in the TLS spec, which is used in the WS-Trust spec for scenarios where the service you want to call requires client and server entropy. It has been tested with Microsoft Dynamics CRM and ADFS.

You can find the library here

A little context

As a security feature, WS-Trust supports proof-of-possession tokens. A proof-of-possession (POP) token is a security token that contains secret data that can be used to demonstrate authorized use of an associated security token, so the final service (relying party) can validate that the caller is the "real" owner of the token being presented. Typically, although not exclusively, the POP token consists of a key known to the relying party.

WS-Trust specifies two ways of using proof-of-possession token keys: specific and partial.

When you use specific keys, the requestor can specify the key when requesting the token, or the security token service can return the key in the request security token response, inside the <wst:RequestedProofToken> element. In both cases you just use the specific key to sign the requests you send to the relying party (the final service).

When you use partial keys, the final key, the one with which you sign the requests you send to the relying party, must be calculated by combining two keys: a client key and a server key.

In this scenario, also known as client and server entropy, the client must specify a random key when requesting the security token, using the <wst:Entropy> element inside the <RequestSecurityToken> structure, and the security token service must respond with another key, using the same element (<wst:Entropy>) inside the <RequestSecurityTokenResponse> message. The server will also return a <wst:ComputedKey> element to indicate how the final key is computed.

<t:RequestSecurityTokenResponse xmlns:t="">
  <t:Entropy>
    <t:BinarySecret>pSvudXqeURG8H0MrrKr2H+Q7nJ51WrcRJphoqcvGWu0=</t:BinarySecret>
  </t:Entropy>
  <t:RequestedProofToken>
    <t:ComputedKey></t:ComputedKey>
  </t:RequestedProofToken>
</t:RequestSecurityTokenResponse>

While this can be extended, the default mechanism in the WS-Trust 1.3 spec is the PSHA1 algorithm (defined in the TLS spec).

To sum up, that means that both keys, client and server, must be combined using the PSHA1 algorithm, and that is what this module implements.

How do I know if the service I want to consume requires client and server entropy?

That is easy: it is described in the WSDL of the service using WS-Policy. You will find something like this:

<sp:Trust13 xmlns:sp=""> <wsp:Policy> <sp:MustSupportIssuedTokens/> <sp:RequireClientEntropy/> <sp:RequireServerEntropy/> </wsp:Policy> </sp:Trust13>


The usage is very simple: you just need to provide the client key, the server key, and the key size.

var psha1 = require('psha1');

var key = psha1('GS5olVevYdlK4/rP8=', 'LmF9Mjf9lYMHDx376jA=', 256);

In the next post I will show how to sign a request using this key.

Hope it's useful!

Categories: Blogs

SelfHost Utilities

Pablo Blog - Wed, 2014-07-23 12:49

Self-hosting an HTTP server is a very common scenario these days, with the push that Microsoft and the rest of the community are giving to Owin. One of the challenges you often find in this scenario is using HTTPS, and I can say from experience that it's not trivial. You have to run several commands and usually generate a self-signed certificate for SSL.

As part of the project I was working on, we had to automate many of these steps in the installation process, so we came up with a set of utility classes that call the underlying Win32 APIs to generate the certificate and also do the required registrations for the namespace and port. The process for doing this with these classes is pretty straightforward, as shown below:

var cert = X509Util.CreateSelfSignedCertificate(Environment.MachineName);

//Register a namespace reservation for everyone in localhost on port 9010
HttpServerApi.ModifyNamespaceReservation(new Uri("https://localhost:9010"),
    "everyone",
    HttpServerApiConfigurationAction.AddOrUpdate);

//Register the SSL certificate for any address on port 9010.
HttpServerApi.ModifySslCertificateToAddressBinding("",
    9010,
    cert.GetCertHash(),
    System.Security.Cryptography.X509Certificates.StoreName.My,
    HttpServerApiConfigurationAction.AddOrUpdate);

All the code is now available on GitHub: SelfHostUtilities.

Categories: Blogs

HTTP/HTTPS debugging on Mobile Apps with Man In The Middle

Leandro Boffi - Tue, 2014-06-24 13:40

In this post I want to share with you an amazing tool called mitmproxy (man-in-the-middle proxy). As you can imagine, this tool is an HTTP/HTTPS proxy that allows you to debug not only HTTP communications but also HTTPS/SSL calls.

Here you can see it in action!


I did the tests using an iPhone, but this method applies to any mobile or non-mobile app or platform.

Installing MITMProxy

Installing mitmproxy is very easy: you just need to have Python and pip installed.

If you don’t have pip you can install it like this:

$ wget
$ python

Once you have installed pip, you just need to:

$ pip install mitmproxy

Running MITMProxy and configuring your IPhone

To start debugging your HTTP/HTTPS apps, follow these steps:

1) On your iPhone, configure your machine's IP as the HTTP proxy, using port 8080 (the default for mitmproxy).
2) Start mitmproxy on your machine.
3) Open Safari on the iPhone and navigate to
4) Choose the Apple icon and install the SSL certificate for MITM.


That's all. Now you just need to start using your apps, and you will see the traffic in your console.

Hope it's useful!

Categories: Blogs

Windows Azure ACS Google Authentication Broken or “The difference between a serious cloud service and Windows Azure ACS”

Leandro Boffi - Fri, 2014-06-13 04:54

As you probably know, Google is migrating to OpenID Connect under the name Google+ Sign-In, a migration that I celebrate. As part of this process, they are deprecating a couple of endpoints and methods of authentication.

As any serious cloud service would, they announced this migration a long time ago, publishing a schedule that clearly specifies dates, the features that will be deprecated, and the actions to take.

On May 19, they closed the registration of new OpenID 2.0 clients, so existing clients will work until April 20, 2015, but you cannot register new ones.

Now, that is how a serious cloud service works: when you provide a cloud service, you must provide more than functionality, you must provide confidence and stability, keeping in mind that your customers' systems will rely on you.

Now, as you know, Windows Azure Access Control Service (now part of Windows Azure Active Directory) uses Google OpenID 2.0 to federate authentication with Google, and, as you can imagine, it hasn't been migrated to the new Google+ Sign-In. That means that any ACS namespace created after May 19 has Google authentication completely broken.

When attempting to sign in, you will see an error like this:

[Screenshot: Windows Azure ACS sign-in error]

So, if you trusted Windows Azure ACS, and your architecture requires creating ACS namespaces (as a multi-tenant architecture does, for example), your systems will be broken.

It is really a pity, because I think Windows Azure is a great platform, and this really surprised me coming from a serious company like Microsoft. I think I will think twice next time before trusting a Windows Azure service.

Categories: Blogs

CCS Injection: New vulnerability found on OpenSSL

Leandro Boffi - Fri, 2014-06-06 03:02

After the Heartbleed bug, a new critical vulnerability was found today in OpenSSL: CCS Injection.

This new vulnerability is based on the fact that OpenSSL accepts the ChangeCipherSpec (CCS) message inappropriately during a handshake. (The ChangeCipherSpec message is used to change the encryption being used by the client and the server.)

By exploiting this vulnerability, an attacker could force SSL clients to use weak keys, allowing man-in-the-middle attacks against encrypted communications.

Who is vulnerable?

The bug is present in all OpenSSL versions earlier than 0.9.8y, in versions 1.0.0 to 1.0.0l, and in versions 1.0.1 to 1.0.1g.

To perform a man-in-the-middle attack, both server and client must be vulnerable. But attackers can still hijack authenticated sessions even if only the server is vulnerable.

Most mobile browsers (e.g. Firefox Mobile, Safari Mobile) are not vulnerable, because they do not use OpenSSL. Chrome on Android does use OpenSSL and may be vulnerable.

Actions to take

To prevent this kind of attack, update your OpenSSL server to one of the unaffected versions: 1.0.1h (recommended), 1.0.0m, or 0.9.8za.

Unlike with Heartbleed, private keys are not exposed, so you don't need to regenerate them (unless you have transferred them over a vulnerable SSL/TLS connection).

For more information about the vulnerability, refer to this article.

Categories: Blogs

Covert Redirect: Facebook and ESPN Security, oh my god…

Leandro Boffi - Sat, 2014-05-03 09:07

Yesterday a vulnerability was published under the name Covert Redirect, as a new security flaw in OAuth 2.0 / OpenID.

The article says:

Covert Redirect is an application that takes a parameter and redirects a user to the parameter value WITHOUT SUFFICIENT validation. This is often the result of a website's overconfidence in its partners. In other words, the Covert Redirect vulnerability exists because there is not sufficient validation of the redirected URLs that belong to the domain of the partners.

Two main validation methods that would lead to Covert Redirect Vulnerability:
(1) Validation using a matched domain-token pair
(2) Validation using a whitelist

Now, I have to say that this is not new. In fact, it really surprises me that this kind of attack is still possible. It is not an OAuth 2.0 / OpenID vulnerability, but it can be a problem in any poor implementation of OAuth 2.0, WS-Federation, SAML-P, or any other redirect- and token-based authentication method.

In this video, the publisher shows how an attacker could obtain a Facebook access token (implicit flow) or a Facebook authorization code (authorization code flow) from a victim, using an open redirector on the ESPN site.

Let's see how it works.

Facebook's poor OAuth implementation

As OAuth 2.0 commands, Facebook receives the redirect URL, the URL where the token will be sent after the user authorizes the application through the consent screen (I removed other OAuth parameters for better presentation):

It seems pretty obvious that this URL MUST be validated, because otherwise it would be pretty easy for an attacker to change it and obtain the token from the victim. That's why you need to ask clients to register their callback URL.

In fact, the OAuth 2.0 Threat Model, in the section "Validation of pre-registered redirect_uri", says:

An authorization server SHOULD require all clients to register their redirect_uri and the redirect_uri should be the full URI as defined in [I-D.ietf-oauth-v2]. The way this registration is performed is out of scope of this document. Every actual redirection URI sent with the respective client_id to the end-user authorization endpoint must match the registered redirection URI. Where it does not match, the authorization server must assume the inbound GET request has been sent by an attacker and refuse it. Note: the authorization server MUST NOT redirect the user agent back to the redirection URI of such an authorization request.

The OpenID Connect spec, in the section "Authentication Request", also says:

REQUIRED. Redirection URI to which the response will be sent. This URI MUST exactly match one of the Redirection URI values for the Client pre-registered at the OpenID Provider.

Facebook allows you to register the callback URI (redirect_uri), but it seems that, ignoring the specs, to simplify things for developers, they only validate the domain of the value received in the redirect_uri parameter, allowing any subdomain or path. That seems to be enough, until one of their clients has an open redirect vulnerability.
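The exact-match rule the specs describe is simple to implement. A minimal sketch (the function name and registered list are illustrative, not any provider's actual code):

```javascript
// Sketch: redirect_uri validation as the OAuth 2.0 Threat Model requires.
// The requested URI must match a pre-registered URI exactly; no
// domain-only, subdomain, or path-prefix comparisons.
function isRegisteredRedirectUri(registeredUris, requestedUri) {
  return registeredUris.some((uri) => uri === requestedUri);
}
```

With this rule in place, a redirect_uri pointing at a partner's open redirector simply would not match, and per the Threat Model the authorization server would refuse the request instead of redirecting.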

ESPN Open Redirect Vulnerability

Quoting the Open Redirect definition:

An open redirect is an application that takes a parameter and redirects a user to the parameter value without any validation.

The ESPN site has one of these at this endpoint:

Not only does it redirect to the URI specified in the parameter without any validation, it also forwards the current query string parameters (pretty dangerous).
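A common fix for this class of endpoint is to accept only same-site relative paths. A minimal sketch (the names and fallback path are illustrative, not ESPN's actual code):

```javascript
// Sketch: closing an open redirector by rejecting anything that could
// leave the site. Only root-relative paths like "/scores" are allowed;
// absolute URLs ("http://evil.example/") and protocol-relative URLs
// ("//evil.example") fall back to a safe default.
function safeRedirectTarget(param, fallbackPath) {
  if (typeof param === 'string' && /^\/(?!\/)/.test(param)) {
    return param; // root-relative path, stays on this site
  }
  return fallbackPath;
}
```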

The Covert Redirect attack

As you can imagine, combining the fact that Facebook only validates the domain with the open redirect vulnerability on the ESPN site, you can do something like this (I didn't use URL encoding for better presentation):
redirect_uri= &url=

Once the victim executes that URL, Facebook will show its consent screen saying that ESPN is asking for permission, and the token generated by Facebook will be sent to the URL specified in the url parameter.


Covert Redirect is nothing new, and it is not a vulnerability in OAuth or OpenID. There is a lot written about the redirect_uri parameter and how to validate it properly.

Covert Redirect is a mix of a poor OAuth implementation (Facebook) and an open redirector (ESPN). So, if you have an open redirector endpoint on your site, fix it. On the Facebook side, they refused to fix the flexible redirect_uri validation a long time ago, so you shouldn't expect anything new.

Categories: Blogs

OAuth Proof of Possession drafts are here!

Leandro Boffi - Mon, 2014-04-28 18:46

One of the concerns about OAuth 2.0 is that it uses bearer tokens, a kind of token that is not tied to any context at all.

That means that any party in possession of a token can get access to the associated resources, without any other demonstration.

This month, the IETF has published a couple of new drafts to enhance OAuth security against token disclosure. The first one to look at is an overview of the OAuth 2.0 Proof-of-Possession (PoP) security architecture; then there are semantics for including PoP keys in JWTs, a method for key distribution, and a method for signing HTTP requests.

Categories: Blogs

Releasing Astor: A developer tool for token-based authentication

Leandro Boffi - Sun, 2014-04-13 10:09

I've just published to npm the first version of Astor. Astor is a command-line developer tool that helps when you work with token-based authentication systems.

At the moment, it allows you to issue tokens (it currently supports the JWT and SWT formats) to test your APIs. Basically, you can do something like this:

$ astor issue -issuer myissuer -profile admin -audience

The result of running that command will be something like this:

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJodHRwOi8vc2NoZW1hcy54bWxzb2FwLm9yZy 93cy8yMDA1LzA1L2lkZW50aXR5L2NsYWltcy9uYW1lIjoiTGVhbkIiLCJhdWQiOiJodHRwOi8vc mVseWluZ3BhcnR5LmNvbS8iLCJpc3MiOiJodHRwOi8vbXlpc3N1ZXIuY29tLyIsImlhdCI6MTM5 NzM3NjU5MX0.d6Cb0IQsltocjOtLsfXhjseLcZpcNIWnHeIv4bqrCv4

Yes! A signed JWT ready to send to your API!

Astor basically works with a configuration file that stores issuers, user profiles, and issue-session configurations. That's why you can say -issuer myissuer or -profile admin without specifying the issuer key and user claims. To clarify, this is how astor.config looks:

{ "profiles": { "": { "": "Leandro Boffi", "": "" }, "admin": { "": "John Smith", "": "John Smith", "": "Administrator", } }, "issuers": { "contoso": { "name": "contoso", "privateKey": "-----BEGIN RSA PRIVATE KEY-----\nMIIEow.... AKCAQEAwST\n-----END RSA PRIVATE KEY-----\n" }, "myissuer": { "name": "", "privateKey": "MIICDzCCAXygAwIBAgIQVWXAvbbQyI5BcFe0ssmeKTAJBg=" } } }

Did you get that? Once you have created the different profiles and issuers, you can combine them very easily to issue several tokens.

Of course, you can start from scratch and specify all the parameters in a single command, without using the config file:

$ astor issue -n -l privateKey.key -a
Create user profile...
Here you have some common claim types, just in case:
- Name:
- Email:
- Name Identifier:
- User Principal:
claim type (empty for finish):
claim value: Leandro Boffi
claim type (empty for finish):
claim value:
claim type (empty for finish):
Would you like to save the profile? y
Enter a name for saving the profile:
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJodHRwOi8vc2NoZW1hcy54bWxzb2FwLm9yZy93cy8yMDA1LzA1L2lkZW50aXR5L2NsYWltcy9lbWFpbCI6Im1lQGxlYW5kcm9iLmNvbSIsImh0dHA6Ly9zY2hlbWFzLnhtbHNvYXAub3JnL3dzLzIwMDUvMDUvaWRlbnRpdHkvY2xhaW1zL25hbWUiOiJMZWFuZHJvIEJvZmZpIiwiYXVkIjoiaHR0cDovL3JlbHlpbmdwYXJ0eS5jb20vIiwiaXNzIjoiaHR0cDovL215aXNzdWVyLmNvbS8iLCJpYXQiOjEzOTczODMwMzR9.1vy9kyY26NwjOQ4gqfy5ZBIQgovgw0gxd4TcVXWzFok
Would you like to save the session settings? y
Enter session name: token-for-test

As you can see, if you don't use a stored profile you will be prompted to create one on the spot, and once you have created the profile you can save it in the configuration for the future!

And finally, you can provide a name for the whole session, in the example token-for-test, so the next time you need the same settings you can just do:

$ astor issue -s token-for-test

How to install it?

$ npm install -g astor

Next steps?

I'll be adding token validation functionality, together with other token formats like SAML, and maybe authentication flows!

Check the readme on GitHub for detailed documentation.

Hope you found it useful!

Categories: Blogs

Don't Inject Markup in A Web Page using Document.Write

Professional ASP.NET Blog - Tue, 2013-06-04 15:33
Look around just about every consumer facing site you visit these days has a third party script reference. Just about everyone uses Google Analytics and if you are like a former client of mine you have it and 2 other traffic analysis service scripts injected...(read more)
Categories: Blogs

Sending a Photo via SMS on Windows Phone

Professional ASP.NET Blog - Thu, 2013-05-30 03:01
Smartphones are awesome. They are the modern Swiss Army Knife because they do so much. One of the most important features in my opinion is taking photos. My Nokia Lumia has one of the best cameras available in a Smartphone and I like to use it all the...(read more)
Categories: Blogs

You Don't Need Windows To Test Your Web Site in Internet Explorer

Professional ASP.NET Blog - Wed, 2013-05-29 17:25
I know the majority of developers reading my Blogs are typically ASP.NET, enterprise developers. This means they develop on a Windows machine using Visual Studio most of the time. However in the broad market most modern web developers work on a MAC or...(read more)
Categories: Blogs

Using The New Git Support in WebMatrix 3

Professional ASP.NET Blog - Sun, 2013-05-26 15:19
WebMatrix is probably my favorite web development IDE because it is so simple and easy to use. Sure I use Visual Studio 2012 everyday and it has probably the best web development features available on the market. I also really dig Sublime. WebMatrix is...(read more)
Categories: Blogs

Publish to Directly To Azure Web Sites With WebMatrix

Professional ASP.NET Blog - Wed, 2013-05-01 20:39
WebMatrix is one of my favorite development tools because it really allows me to focus on what I love to do most, build modern web clients. It is a free Web IDE available from Microsoft and today they released version 3 for general availability . There...(read more)
Categories: Blogs

17000 Tweets in 365 Days - Not Too Many To Be Annoying

Professional ASP.NET Blog - Tue, 2013-04-30 14:29
What the heck was I thinking? Why did I do it? What did I learn? How did I do it? These are all things I have asked myself and others have asked me over the past year. It sounds like an odd labor to undertake and such an odd number. But yes I did 17,000...(read more)
Categories: Blogs

Introducing ToolbarJS - A HTML5 JavaScript Library to Implement the Windows Phone AppBar Functionality

Professional ASP.NET Blog - Sun, 2013-04-28 12:03
Back in February I released deeptissuejs , a HTML5, JavaScript touch gesture library. In January I release panoramajs a HTML5, JavaScript library to implement the basic Windows Phone panorama control experience. This month I am excited to release another...(read more)
Categories: Blogs

HTML5 and CSS3 Zebra Striping - Look Ma No JavaScript

Professional ASP.NET Blog - Mon, 2013-04-22 11:36
It was 5 maybe 6 years ago when I first started learning jQuery. One of the first things I did was order the jQuery In Action book . If you have read that book you should remember one of the first examples given, zebra striping a table. To me this example...(read more)
Categories: Blogs

Listen to Me Talk to Carl & Richard about the Surface Pro, Mobile Development and More

Professional ASP.NET Blog - Thu, 2013-04-18 11:53
A few weeks ago I got to sit down and chat with the DotNetRocks guys about a variety of topics. The initial premise for the interview was to talk about the Surface and why I love it so much. I think we got into some great tangents right from the start!...(read more)
Categories: Blogs

Why Its Time to Sunset jQuery

Professional ASP.NET Blog - Sun, 2013-04-14 14:15
I owe so much to John Resig and the jQuery team for creating such a wonderful framework. I have staked most of my recent career on jQuery the way I staked my career on ASP.NET back in 2001. I have built many applications using jQuery over the past five...(read more)
Categories: Blogs

The Good and Bad For - Helping it Scale With Web Performance Optimization

Professional ASP.NET Blog - Fri, 2013-04-12 13:30
BitCoin seems to be latest rage with wild value fluctuations. The past few days have seen a very wild roller coaster for the online currency. Most of the world's BitCoins are exchanged at , which has had some issues either with a denial of service...(read more)
Categories: Blogs