Share What We Say

Filter by:


PSHA1 Algorithm for WS-Trust Server and Client Entropy Scenarios on Node.js

Leandro Boffi - Wed, 2014-07-23 13:34

I’ve just published a new Node.js module that implements the P_SHA1 algorithm as specified in TLS spec, that is used on WS-Trust spec in scenarios where the service you want to call requires client and server entropy. It has been tested with Microsoft CRM Dynamics and ADFS.

You can find the library here

A little context

As a security feature WS-Trust supports Proof-of-Possession Tokens. A proof-of-possession (POP) token is a security token that contains secret data that can be used to demonstrate authorized use of an associated security token, thereby the final service (relying party) can validate that the caller is the “real” owner of the token that he is presenting. Typically, although not exclusively, the POP token consist of a key known by the relying party.

WS-Trust specifies two ways of use proof-of-possession token keys: specific and partial.

When you use specific keys, the requestor can specify the key when he requests the token or the security token service can retrieve the key in the request security token response inside of the <wst:RequestedProofToken>. In both cases you just need to use the specific key to sign the requests you perform to the relying party (final service)

When you use partial keys, the final key, the key with which you will sign the requests you perform to the relying party must be calculated combining two keys: client key and server key.

In this scenario, also known as client and server entropy, when requesting the security token the client must specify a random key using the <wst:Entropy> element inside of the <RequestSecurityToken> structure, and the security token service must respond with another key, using the same element (<wst:Entropy>) inside of the <RequestSecurityTokenResponse> message. At the same time, the server will return a <wst:ComputedKey> element to indicate how the final key is computed.

<t:RequestSecurityTokenResponse xmlns:t=""> <t:Entropy> <t:BinarySecret>pSvudXqeURG8H0MrrKr2H+Q7nJ51WrcRJphoqcvGWu0=</t:BinarySecret> </t:Entropy> <t:RequestedProofToken> <t:ComputedKey></t:ComputedKey> </t:RequestedProofToken> </t:RequestSecurityTokenResponse>

While this can be extended, the default mechanism in WS-Trust 1.3 spec is the PSHA1 algorithm (defined in TLS spec, identified by this uri: (also is possible)

To resume, that means that both keys, client and server must be combined using the PSHA1 algoritm, and that is what this module implements.

How do I know if the service I want to consume requires client and server entropy?

That is easy, it is described in the WSDL of the service using WS-Policy, you will find something like this:

<sp:Trust13 xmlns:sp=""> <wsp:Policy> <sp:MustSupportIssuedTokens/> <sp:RequireClientEntropy/> <sp:RequireServerEntropy/> </wsp:Policy> </sp:Trust13>


The usage is very simple, you just need to provide client key, server key, and key size.

var psha1 = require('psha1'); var key = psha1('GS5olVevYdlK4/rP8=', 'LmF9Mjf9lYMHDx376jA=', 256);

In the next post I will show how to sign a request using this key.

Hope be useful!

Categories: Blogs

SelfHost Utilities

Pablo Blog - Wed, 2014-07-23 12:49

Self Hosting a Http server is a very common scenario these days with the push that Microsoft and the rest of the community are giving to Owin. One of the challenges you often find in this scenario is the ability to use HTTPS, and I can say by experience that it’s not something trivial. You have to run several commands, and usually generate a self signed certificate for SSL.

As part of the project where I was working on, we had to automate many of these steps in the installation process so we came up with a set of utilities classes that call the underline Win32 APIS for generate the certificate and also do the required registrations for the namespace and port. The process for doing this with these classes is pretty straigforward as it is shown below,

1 2 3 4 5 6 7 8 9 10 11 12 13 var cert = X509Util.CreateSelfSignedCertificate(Environment.MachineName); //Register a namespace reservation for everyone in localhost in port 9010 HttpServerApi.ModifyNamespaceReservation(new Uri("https://localhost:9010"), "everyone", HttpServerApiConfigurationAction.AddOrUpdate); //Register the SSL certificate for any address ( in the port 9010. HttpServerApi.ModifySslCertificateToAddressBinding("", 9010, cert.GetCertHash(), System.Security.Cryptography.X509Certificates.StoreName.My, HttpServerApiConfigurationAction.AddOrUpdate);

All the code is now available for you in github SelfHostUtilities.

Categories: Blogs

Graceful shutdown in node.js

Jose Romaniello Blog - Mon, 2014-07-21 08:42

According to wikipedia - Unix Signal:

Signals are a limited form of inter-process communication used in Unix, Unix-like, and other POSIX-compliant operating systems. A signal is an asynchronous notification sent to a process or to a specific thread within the same process in order to notify it of an event that occurred.

There are a bunch of generic signals, but I will focus on two:

  • SIGTERM is used to cause a program termination. It is a way to politely ask a program to terminate. The program can either handle this signal, clean up resources and then exit, or it can ignore the signal.
  • SIGKILL is used to cause inmediate termination. Unlike SIGTERM it can't be handled or ignored by the process.

Wherever and however you are deploying your node.js application it is very likely that the system in charge of running your app use these two signals:

  • Upstart: When stoping a service, by default it sends SIGTERM and waits 5 seconds, if the process is still running, it sends SIGKILL.
  • supervisord: When stoping a service, by default it sends SIGTERM and waits 10 seconds, if the process is still running, it sends SIGKILL.
  • runit: When stoping a service, by default it sends SIGTERM and waits 10 seconds, if the process is still running, it sends SIGKILL.
  • Heroku dynos shutdown: as described in this link heroku send SIGTERM, waits the process to exit for 10 seconds and if the process is still running it sends SIGKILL.
  • Docker: If you run your node app in a docker container, when running docker stop command the main process inside the container will receive SIGTERM, and after a grace period (10 seconds by default), SIGKILL.

So, let's try a with this simple node application:

var http = require('http'); var server = http.createServer(function (req, res) { setTimeout(function () { //simulate a long request res.writeHead(200, {'Content-Type': 'text/plain'}); res.end('Hello World\n'); }, 4000); }).listen(9090, function (err) { console.log('listening http://localhost:9090/'); console.log('pid is ' + });

As you can see response are delayed 4 seconds. The node documentation here says:

SIGTERM and SIGINT have default handlers on non-Windows platforms that resets the terminal mode before exiting with code 128 + signal number. If one of these signals has a listener installed, its default behaviour will be removed (node will no longer exit).

It is not clear from here what's the default behavior, I send SIGTERM in the middle of a request the request will fail as you can see here:

» curl http://localhost:9090 & » kill 23703 [2] 23832 curl: (52) Empty reply from server

Fortunately, the http server has a close method that stops the server for receiving new connections and calls the callback once it finished handling all requests. This method comes from the NET module, so is pretty handy for any type of tcp connections.

Now, if I modify the example to something like this:

var http = require('http'); var server = http.createServer(function (req, res) { setTimeout(function () { //simulate a long request res.writeHead(200, {'Content-Type': 'text/plain'}); res.end('Hello World\n'); }, 4000); }).listen(9090, function (err) { console.log('listening http://localhost:9090/'); console.log('pid is ' +; }); process.on('SIGTERM', function () { server.close(function () { process.exit(0); }); });

And then I use the same commands as above:

» curl http://localhost:9090 & » kill 23703 Hello World [1] + 24730 done curl http://localhost:9090

You will notice that the program doesn't exit until it finished processing and serving the last request. More interesting is the fact that after the SIGTERM signal it doesn't handle more requests:

» curl http://localhost:9090 & [1] 25072 » kill 25070 » curl http://localhost:9090 & [2] 25097 curl: (7) Failed connect to localhost:9090; Connection refused [2] + 25097 exit 7 curl http://localhost:9090 » Hello World [1] + 25072 done curl http://localhost:9090

Some examples in blogs and stackoverflow uses a timeout on SIGTERM in the case that server.close takes longer than expected. As mentioned above this is unnecesary because every process manager will send a SIGKILL if the SIGTERM takes too much time.

Categories: Blogs

HTTP/HTTPS debugging on Mobile Apps with Man In The Middle

Leandro Boffi - Tue, 2014-06-24 13:40

In this post I want to share with you an amazing tool called Man in the middle proxy. As you can imagine, this tool is an HTTP/HTTPS proxy that allow you to perform debug not only on HTTP communications but also on HTTPS/SSL calls.

Here you can see it in action!


I did the tests using an IPhone, but this method applies to any mobile or non mobile app or platform.

Installing MITMProxy

Installing mitmproxy is very easy, you just need to have Python installed and pip.

If you don’t have pip you can install it like this:

$ wget $ Python

Once you installed pip you just need to:

$ pip install mitmproxy

Running MITMProxy and configuring your IPhone

To start debugging your http/https apps follow next steps:

1) Configure in your iphone the IP of your machine as http proxy and port 8080 (default for MITMProxy).
2) Start mitmproxy in your machine.
3) Open Safari in Iphone and navigate to
4) Choose Apple icon and install the SSL Certificate for MITM.


That’s all, now you just need to start using your apps, and you will be able to see the traffic in your console.

Hope be useful!

Categories: Blogs

Windows Azure ACS Google Authentication Broken or “The difference between a serious cloud service and Windows Azure ACS”

Leandro Boffi - Fri, 2014-06-13 04:54

As you probably know, Google is migrating to Open Id Connect under the name of Google+ Sign-In, migration that I celebrate. As part of this process, they are deprecating a couple of endpoints and methods to authenticate.

As any serious cloud service, they have announced this migration long time ago, publishing an schedule that clearly specifies dates, features that will be deprecated and actions to take.

Last May 19, they closed the registration of new OpenId 2.0 clients, so existing clients will work until April 20, 2015 but you cannot register new clients.

Now, that is how a serious cloud service works, because when you provide a cloud service you must provide more than the service functionality, you must provide confidence and stability, having in mind that your customer’s systems will rely on you.

Now, as you know Windows Azure Access Control Service (now part of Windows Azure Active Directory) uses Google OpenId 2.0 as method to federate authentication with Google, and as you can imagine, they haven’t migrated to the new Google+ SignIn. That means that any ACS namespace that you have created after May 19 will have Google Authentication completely broken.

When you attempting to sign in you will see an error like this one:

Screen Shot 2014-06-13 at 1.41.43 AM

So, if you trusted on Windows Azure ACS, and your architecture requires to create ACS Namespaces (like a multi-tenant architecture for example) your systems will be broken.

It is really a pity, because I think that Windows Azure is a great platform, and it really surprised me coming from a serious company like Microsoft, but I think that I will think twice next time before trusting in a Windows Azure Service.

Categories: Blogs

CCS Injection: New vulnerability found on OpenSSL

Leandro Boffi - Fri, 2014-06-06 03:02

After the Heartbleed Bug a new critical vulnerably was found today on OpenSSL: CCS Injection.

This new vulnerability is based on the fact that OpenSSL accepts ChangeCipherSpec (CCS) inappropriately during a handshake (The ChangeCipherSpec message is used to change the encryption being used by the client and the server)

By exploiting this vulnerability an attacker could force SSL clients to use weak keys, allowing man-in-the-middle attack against encrypted communications.

Who is vulnerable?

The bug is present in all OpenSSL versions earlier than 0.9.8y, 1.0.0 to 1.0.0l and 1.0.1 to 1.0.1g.

In order to perform man-in-the-middle attack both server and client must be vulnerable. But attackers can still hijack authenticated sessions even if just the server is vulnerable.

Most mobile browsers (i.e. Firefox mobile, Safari mobile) are not vulnerable, because they do not use OpenSSL. Chrome on Android does use OpenSSL, and may be vulnerable.

Actions to take

To prevent this kind of attacks update your OpenSSL server to one of the non affected versions: 1.0.1h (recommended), 1.0.0m or 0.9.8za.

Unlike with Heartbleed private keys are not exposed, so you don’t need to regenerate them (at least you have transferred them via SSL/TLS).

For more information about vulnerability refer to this article.

Categories: Blogs

AppFabric OutputCaching

Pablo Blog - Fri, 2014-05-23 17:36

ASP.NET Web API does not provide any output caching capabilities out of the box other than the ones you would traditionally find in the ASP.NET caching module. Fortunately, Filip wrote a very nice library that you can use to decorate your Web API controller methods with an [OutputCaching] attribute, which is similar to the one you can find in ASP.NET MVC. This library provides a way to configure different persistence storages for the cached data, which uses memory by default. As part of this post, I will show how you can implement your own persistence provider for AppFabric in order to support distributed caching on web applications running on premises.

The first thing is to install AppFabric Caching. Wade Wegner wrote a very useful post describing all the required steps here.

Once AppFabric Cache is installed, you need to start the cluster and configure a new cache that will be used by our extension. Open up the Cache PowerShell console (Caching Administration Windows PowerShell in programs). Run the following command in the PowerShell console:


Create a new cache for our extension. Run the following command in the PowerShell console:

New-Cache OutputCache

At that point, we are ready to jump into the implementation of the extension. Before writing any code, we will use the AppFabric Caching client library, which is available as a Nuget package. You can find it in the repository under the name of “ServerAppFabric.Client”. The library written by Filip provides an extension point for the persistence providers called IApiOutputCache, so we will have to implement that interface.

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 public class AppFabricCachingProvider : IApiOutputCache { readonly static DataCacheFactory Factory = new DataCacheFactory(); const string Region = "OutputCache"; readonly DataCache cache; public AppFabricCachingProvider(string cacheName) { this.cache = Factory.GetCache(cacheName); this.cache.CreateRegion(Region); } public void Add(string key, object o, DateTimeOffset expiration, string dependsOnKey = null) { var exp = expiration - DateTime.Now; if (dependsOnKey == null) { dependsOnKey = key; } cache.Put(key, o, new[] { new DataCacheTag(dependsOnKey) }, Region); } public bool Contains(string key) { var result = this.cache.Get(key, Region); if (result != null) return true; return false; } public object Get(string key) { var result = this.cache.Get(key, Region); return result; } public T Get<T>(string key) where T : class { var result = this.cache.Get(key, Region) as T; return result; } public void Remove(string key) { this.cache.Remove(key, Region); } public void RemoveStartsWith(string key) { var objs = this.cache.GetObjectsByTag(new DataCacheTag(key), Region); foreach(var o in objs) { this.cache.Remove(o.Key, Region); } } }

This implementation is pretty straightforward and uses the AppFabric client library for getting or storing data in the cache. All the data is stored as part of a region, which groups all the entries together to facilitate management. The name of the cache is passed as part of the constructor, so it will be provided at the moment of instantiating and configuring this extension.

The following code shows how the extension is configured,

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 var config = new HttpSelfHostConfiguration("http://localhost:999"); config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); var server = new HttpSelfHostServer(config); config.CacheOutputConfiguration().RegisterCacheOutputProvider(() => new AppFabricCachingProvider("OutputCache")); server.OpenAsync().Wait(); Console.ReadKey(); server.CloseAsync().Wait();

The name of the cache must be the same that we used before in the powershell console with the New-Cache command.

Finally, output caching can be configured in any existing controller using the [OutputCaching] attribute distributed with the Filip’s library.

1 2 3 4 5 6 7 8 public class TeamsController : ApiController { [CacheOutput(ClientTimeSpan = 50, ServerTimeSpan = 50)] public IEnumerable<Team> Get() { return Teams; }

The implementation of this provider is available as a separate project in GitHub.

Categories: Blogs

Cassandra – Setting up a cluster in EC2

ALopez Blog - Fri, 2014-05-09 03:05

This post is mainly a recopilation of different sources and what I came up with while creating my first Cassandra Cluster. Hope this helps.

Kudos to, DataStax and the book “Mastering Cassandra” (ISBN : 1782162682)


You must determine or perform the following before starting

1. Choose a name for the cluster.

2. Get the IP address of each node

3. Determine which nodes will be seed nodes. (Cassandra nodes use the seed node list for finding each

other and learning the topology of the ring.)

4. Determine the snitch.

5. If using multiple data centers, determine a naming convention for each data center and rack, for example: DC1, DC2 or 100, 200 and RAC1, RAC2 or R101, R102.

Base Instance Setup

1. Install an Amazon EC2 instance with an Amazon Linux AMI.

2. For the LAB purpose select a Medium instance

Firewall Configuration

Open the following firewall ports for the security group hosting the Cassandra cluster nodes

Public ports
Port number Description
22 SSH port
8888 OpsCenter website. The opscenterd daemon listens on this port for HTTP requests coming directly from the browser.


Cassandra inter-node ports
Port number Description
1024 - 65355 JMX reconnection/loopback ports. See description for port 7199.
7000 Cassandra inter-node cluster communication.
7001 Cassandra SSL inter-node cluster communication.
7199 Cassandra JMX monitoring port. After the initial handshake, the JMX protocol requires that the client reconnects on a randomly chosen port (1024+).
9160 Cassandra client port (Thrift).


Cassandra OpsCenter ports
Port number Description
61620 OpsCenter monitoring port. The opscenterd daemon listens on this port for TCP traffic coming from the agent.
61621 OpsCenter agent port. The agents listen on this port for SSL traffic initiated by OpsCenter.



Make sure the Linux instances are created and running, using Putty to open a terminal session

Connecting to Linux/Unix Instances from Windows Using PuTTY:


Cluster Installation – Steps

- Update the Amazon Linux

sudo yum update

- Locate the latest stable Cassandra version

- Download Cassandra


   NOTE: Identified the latest stable version for the corresponding URL

- Un-tar Cassandra and install Cassandra

tar -xzvf apache-cassandra-2.0.7-bin.tar.gz
sudo mv apache-cassandra-2.0.7 /opt

- Create Cassandra default data directory, cache directory and commit log directory

sudo mkdir -p /var/lib/cassandra/data
sudo mkdir -p /var/lib/cassandra/commitlog
sudo mkdir -p /var/lib/cassandra/saved_caches
sudo chown -R ec2-user.ec2-user /var/lib/cassandra

- Create Cassandra logging directory

sudo mkdir -p /var/log/cassandra
sudo chown -R ec2-user.ec2-user /var/log/cassandra

Edit Cassandra configuration

cd /

cd  /opt/apache-cassandra-2.0.7

vi conf/cassandra.yaml

Change the cluster name

cluster_name: ‘Global Dictionary’

NOTE: if the cluster is started at this point using an incorrect name, refer to the troubleshooting section.

Change the listening address for Cassandra & Thrift


NOTE: listen_address is for communication between nodes

Change the rpc address


NOTE: rpc_address is for client communication


At this point should be possible to install Cassandra locally on each node. Some posts mention to not do this, but worst case scenario is to recreated the data directories later on.

sudo  /opt/apache-cassandra-2.0.7/bin/cassandra -f


The listen address defines where the other nodes in the cluster should connect. So in a multi-node cluster it should to changed to it’s identical address of Ethernet interface.  The rpc address defines where the node is listening to clients. So it can be same as node IP address or set it to wildcard if we want to listen Thrift clients on all available interfaces. The seeds act as the communication points. When a new node joins the cluster it contact the seeds and get the information about the ring and basics of other nodes. So in multi-node, it needs to be changed to a routable address  as above which makes this node a seed. Note: In multi-node cluster, it is better to have multiple seeds. Though it doesn’t mean to have a single point of failure in using one node as a seed, it will make delays in spreading status message around the ring.  A list of nodes to be act as seeds can be defined as follows.

Install JNA

Required for production (performance).

l Download jna.jar from

l Add jna.jar $CASSADRA_HOME/lib

l vi /etc/security/limits.conf

$USER soft memlock unlimited
$USER hard memlock unlimited

Repeat the above steps for every node in the ring / cluster


Select a sub-set of ring nodes as seeds. Non-seed nodes contact the seed nodes to join the ring

Define at least one but preferably more for fault tolerance

Seeds are contacted when joining the ring,  no other communication with seeds is necessary afterwards

All nodes should have the same seed list

For each nodes, edit cassandra.yaml to add the Cassandra cluster seeds

seeds: “ip1, ip2”

To add a new seed, start the node as a non-seed node with auto_bootstrap to migrate the data first. Then turn auto_bootstrap off and make it to a seed node


Server clock must be synchronized with service like ntp. Otherwise, schema changes may be rejected as out dated

Install System Monitoring Tool

sudo yum -y install sysstat

Change the server timezone

cd /etc
sudo mv localtime
sudo ln -sf /usr/share/zoneinfo/US/Pacific localtime


Additional Steps for cluster that spans across networks

Start Cassandra (from seeds to non-seed nodes)


NOTE: AWS Reference about Regions and Availability Zones


1) Changing the Broadcast addresses to the public IP’s so the nodes can communicate

2) Changing the seed to the public IP address

3) Changing the snitch to EC2MultiRegion Snitch


broadcast_address: (Default: listen_address) If your Cassandra cluster is deployed across multiple Amazon EC2 regions and you use the EC2MultiRegionSnitch, set the broadcast_address to public IP address of the node and the listen_address to the private IP.

listen_address: (Default: localhost) The IP address or hostname that other Cassandra nodes use to connect to this node. If left unset, the hostname must resolve to the IP address of this node using/etc/hostname, /etc/hosts, or DNS. Do not specify

rpc_address: (Default: localhost) The listen address for client connections (Thrift remote procedure calls).

seed_provider: (Default: org.apache.cassandra.locator.SimpleSeedProvider) A list of comma-delimited hosts (IP addresses) to use as contact points when a node joins a cluster. Cassandra also uses this list to learn the topology of the ring. When running multiple nodes, you must change the - seeds list from the default value ( In multiple data-center clusters, the - seeds list should include at least one node from each data center (replication group)


Use the EC2MultiRegionSnitch for deployments on Amazon EC2 where the cluster spans multiple regions. As with the EC2Snitch, regions are treated as data centers and availability zones are treated as racks within a data center. For example, if a node is in us-east-1a, us-east is the data center name and 1a is the rack location.

You can also specify multiple data centers within an EC2 region using the dc_suffix property in the /etc/dse/cassandra/ file. For example, if node1 and node2 are in us-east-1:

Node Data center
1 dc_suffix=_A us-east-1_A
2 dc_suffix=_B us-east-1_B


This snitch uses public IPs as broadcast_address to allow cross-region connectivity. This means that you must configure each Cassandra node so that the listen_address is set to the private IP address of the node, and the broadcast_address is set to the public IP address of the node. This allows Cassandra nodes in one EC2 region to bind to nodes in another region, thus enabling multiple data center support. (For intra-region traffic, Cassandra switches to the private IP after establishing a connection.)

Additionally, you must set the addresses of the seed nodes in the cassandra.yaml file to that of the public IPs because private IPs are not routable between networks. For example:


To find the public IP address, run this command from each of the seed nodes in EC2:

curl http://instance-data/latest/meta-data/public-ipv4

Finally, be sure that the storage_port or ssl_storage_port is open on the public IP firewall.

When defining your keyspace strategy options, use the EC2 region name, such as “us-east“, as your data center names.

Smoke Test

Start Cassandra (from seeds to non-seed nodes)

cd /opt/apache-cassandra-2.0.7
bin/cassandra -f

To verify the status of the ring cluster after all Cassandra servers are started

bin/nodetool -h localhost ring
Address         Status State   Load            Owns    Token
163572425264069043502692069140600439631   Up     Normal  10.91 KB        70.70%     113716211212737963740265714504910561460   Up     Normal  6.54 KB         29.30%     163572425264069043502692069140600439631

To monitor the Cassandra log files

tail -f /var/log/cassandra/output.log
tail -f /var/log/cassandra/system.log

Starting up Cassandra

Cassandra Options are configured in


Cassandra environment options are configured in



For production system

l make a copy of as

l make changes to the copy

l start Cassandra as

CASSANDRA_INCLUDE=/path/to/ bin/cassandra


To start Cassandra as a non-demon process, use the “-f” option

bin/cassandra -f


To kill Cassandra with a script

l Record the process id to a file

cassandra -p /var/run/

l Kill the process

kill $(cat /var/run/

Fine Tuning

l Seeds

n Already used in previous steps. It is important to keep this value updated, so it helps node discovery.

n Define what are the nodes that will serve as seed to help other non seed nodes to discover the topology of the ring/

l Partitioner: The Murmur3Partitioner provides faster hashing and improved performance than the previous default partitioner (RandomPartitioner).

n Murmur3Partitioner: org.apache.cassandra.dht.Murmur3Partitioner

n RandomPartitioner: org.apache.cassandra.dht.RandomPartitioner

n ByteOrderedPartitioner: org.apache.cassandra.dht.ByteOrderedPartitioner

l Snitches

n Ec2Snitch: used  for one network, there is also another one that allows cluster running across different networks, it is used to put replicas on very next node of a ring.

n EC2MultiRegionSnitch: Use the EC2MultiRegionSnitch for deployments on Amazon EC2 where the cluster spans multiple regions. As with the EC2Snitch, regions are treated as data centers and availability zones are treated as racks within a data center. For example, if a node is in us-east-1a, us-east is the data center name and 1a is the rack location.




Scaling up

Refer to section Cassandra Cluster Installation. Seed and non seed nodes have the same procedure.


Scaling down

If you just shut the nodes down and rebalance cluster, you risk losing some data, that exist only on removed nodes and hasn’t replicated yet.

Safe cluster shrink can be easily done with nodetool. At first, run:

nodetool drain


on the node removed, to stop accepting writes and flush memtables, then:

nodetool decomission


to move node’s data to other nodes, and then shut the node down, and run on some other node:

nodetool removetoken


to remove the node from the cluster completely. The detailed documentation might be found here:

From my experience, I’d recommend to remove nodes one-by-one, not in batches. It takes more time, but much more safe in case of network outages or hardware failures.

Hardware requirements



Hard disk capacity

A rough disk space calculation of the user that will be stored in Cassandra involves adding up data stored in three data components on disk: commit logs, SSTable, index file, and bloom filter. When compared to the data that is incoming and the data on disk, you need to take account of the database overheads associated with each data. The data on disk can be about two times as large as raw data. Disk usage can be calculated using the following code:

# Size of one normal column

column_size (in bytes) = column_name_size + column_val_size + 15

# Size of an expiring or counter column

col_size (in bytes) = column_name_size + column_val_size + 23

# Size of a row

row_size (bytes) = size_of_all_columns + row_key_size + 23

# Primary index file size

index_size (bytes) = number_of_rows * (32 + mean_key_size)

# Additional space consumption due to replication

replication_overhead = total_data_size *

(replication_factor - 1)

Apart from this, the disk also faces high read-write during compaction. Compaction is the process that merges SSTables to improve search efficiency. The important thing about compaction is that it may, in the worst case, utilize as much space as occupied by user data. So, it is a good idea to have a large space left.


We’ll discuss this again, but it depends on the choice of compaction_strategy that is applied. For LeveledCompactionStrategy, having 10 percent space left is enough; for SizeTieredCompactionStrategy, it requires 50 percent free disk space in the worst case. Here are some rules of thumb with regard to disk choice and disk operations:

 Commit logs and datafiles on separate disks: Commit logs are updated on each write and are read-only for startups, which is rare. A data directory, on the other hand, is used to flush MemTables into SSTables, asynchronously; it is read through and written on during compaction; and most importantly, it might be looked up by a client to satisfy the consistency level. Having the two directories on the same disk may potentially cause a block to the client operation.

 RAID 0: Cassandra performs in built replication by means of a replication factor; so, it does not possess any sort of hardware redundancy. If one node dies completely, the data is available on other replica nodes, with no difference between the two. This is the reason that RAID 0 (http:// is the most preferred RAID level. Another reason is improved disk performance and extra space.

 Filesystem: If one has choices, XFS (XFS filesystem: http://en.wikipedia. org/wiki/XFS) is the most preferred filesystem for Cassandra deployment. XFS supports 16 TB on a 32-bit architecture, and a whopping 8 EiB (Exabibyte) on 64-bit machines. Due to the storage space limitations, the ext4, ext3, and ext2 filesystems (in that order) can be considered to be used for Cassandra.

 SCSI and SSD: With disks, the guideline is faster and better. SCSI is faster than SATA, and SSD is faster than SCSI. Solid State Drives (SSD) are extremely fast as there is no moving part. It is suggested to use rather low-priced consumer SSD for Cassandra, as enterprise-grade SSD has no particular benefit over it.

 No EBS on EC2: This is specific to Amazon Web Services (AWS) users. AWS’ Elastic Block Store (EBS: is strongly discouraged for the purpose of storing Cassandra data—either of data directories or commit log storage. Poor throughput and issues such as getting unusably slow, instead of cleanly dying, is a major roadblock of the network-attached storage.


Instead of using EBS, use ephemeral devices attached to the instance (also known as an instance store). Instance stores are fast and do not suffer any problems like EBS. Instance stores can be configured as RAID 0 to utilize them even further.



Larger memory boosts Cassandra performance from multiple aspects.

More memory can hold larger MemTables, which means that fresh data stays for a longer duration in memory and leads to fewer disk accesses for recent data. This also implies that there will be fewer flushes (less frequent disk IO) of MemTable to SSTable; and the SSTables will be larger and fewer.

This leads to improved read performance as lesser SSTables are needed to scan during a lookup. Larger RAM can accommodate larger row cache,

thus decreasing disk access.

For any sort of production setup, a RAM capacity less than 8 GB is not suggested. Memory above 16 GB is preferred


Cassandra is highly concurrent—compaction, writes, and getting results from multiple SSTables and creation of one single view to clients, and all are CPU intensive. It is suggested to use an 8-core CPU, but anything with a higher core will just be better.

For a cloud-based setup, a couple of things to keep in mind:

A provider that gives a CPU-bursting feature should be preferred. One such provider is Rackspace.

AWS Micro instances should be avoided for any serious work. There are many reasons for this. It comes with EBS storage and no option to use an instance store. But the deal-breaker issue is CPU throttling that makes it useless for Cassandra. If one performs a CPU-intensive task for 10 seconds or so, CPU usage gets restricted on micro instances. However, they may be good (cheap), if one just wants to get started with Cassandra.


Each node in the ring is responsible for a set of row keys. Nodes have a token assigned to them on startup either via bootstrapping during startup or by the configuration file. Each node stores keys from the last node’s token (excluded) to the current node’s token (included). So, the greater the number of nodes, the lesser the number of keys per node; the fewer the number of requests to be served by each node, the better the performance.

In general, a large number of nodes is good for Cassandra. It is a good idea to keep 300 GB to 500 GB disk space per node to start with, and to back calculate the number of nodes you may need for your data. One can always add more nodes and change tokens on each node.112 


As with any other distributed system, Cassandra is highly dependent on a network. Although Cassandra is tolerant to network partitioning, a reliable network with less outages are better preferred for the system—less repairs, less inconsistencies.

A congestion-free, high speed (Gigabit or higher), reliable network is pretty important as each read-write, replication, moving/draining node puts heavy load on a network.

System Configuration

Operating system configurations play a significant role in enhancing Cassandra performance. On a dedicated Cassandra server, resources must be tweaked to utilize the full potential of the machine.

Cassandra runs on a JVM, so it can be run on any system that has a JVM. It is recommended to use a Linux variant (CentOS, Ubuntu, Fedora, RHEL, and so on) for Cassandra’s production deployment. There are many reasons for this. Configuring system-level settings are easier. Most of the production servers rely on Linux-like systems for deployment. As of April 2013, 65 percent of servers use it. The best toolings are available on Linux: SSH and pSSH commands such as top, free, df, and ps to measure system performance, and excellent filesystems, for example ext4 and XFS. There are built-in mechanisms to watch the rolling log using tail, and there are excellent editors such as Vim and Emacs. And they’re all free!



l Cassandra: How to fix, ‘Fatal exception during initialization org.apache.cassandra.config. ConfigurationException: Saved cluster name Test Cluster != configured name…’?


l JNA not found. Native methods will be disabled.
Install jna-4.1.0.jar

l Can’t connect to cassandra - NoHostAvailableException

l Unable to gossip with any seeds

n Make sure the Listen Adress is consistent with Sedds in cassandra.yaml

l If you attempt to change the cluster name in cassandra.yaml after it was started once with a different name it will throw an error.

Saved cluster name [old name] != configured name [new name]


EDIT You can rename the cluster without deleting data by updating it’s name in the system.local table (but you have to do this for each node…)

cqlsh> UPDATE system.local SET cluster_name = ‘test’ where key=’local’;

# flush the sstables to persist the update.

bash $ ./nodetool flush

For the purposes of this LAB I simply removed and recreated the folder

cd /var/lib/cassandra

sudo rm -rf cassandra/


and refer to the process to create the folder structure again


l To find out current distro

[ec2-user@ip-172-31-1-109 /]$ cat /etc/*-release

Amazon Linux AMI release 2014.03

[ec2-user@ip-172-31-1-109 /]$

 Drivers to Connect to Cassandra


Other drivers available:




Categories: Blogs

Covert Redirect: Facebook and ESPN Security, oh my god…

Leandro Boffi - Sat, 2014-05-03 09:07

Yesterday a vulnerability was published under the name of Covert Redirect as a new security flag in OAuth 2.0 / OpenId.

In the article says:

Covert Redirect is an application that takes a parameter and redirects a user to the parameter value WITHOUT SUFFICIENT validation. This is often the of result of a website’s overconfidence in its partners. In another word, the Covert Redirect vulnerability exists because there is not sufficient validation of the redirected URLs that belong to the domain of the partners.

Two main validation methods that would lead to Covert Redirect Vulnerability:
(1) Validation using a matched domain-token pair
(2) Validation using a whitelist

Now, I have to say that it is not new, in fact really surprise me that this kind of attacks are still possible, and it is not an OAuth 2.0 / OpenId vulnerability, but it could be a problem of any poor implementation of OAuth 2.0, WSFederation, SAML-P or any other redirect and token based authentication method.

In this video, the publisher shows how an attacker could obtain a Facebook access token (Implicit Flow) or a Facebook Authorization Code (Authorization Code Flow) from a victim using an Open Redirector on the ESPN site.

Lets see how it works.

Facebook poor OAuth implementation

As OAuth 2.0 commands, Facebook gets the redirect url, the url where the token will be sent after the user authorization through the consent screen (I removed other OAuth parameters for better presentation):

It seems pretty obvious that this url MUST be validated, because other way, it would be pretty easy for an attacker to change this url and obtain the token from the victim, thats why you need to ask the clients to register their callback url.

In fact, if you look at the OAuth 2.0 Threat Model in the section Validation of pre-registered redirect_uri says:

An authorization server SHOULD require all clients to register their redirect_uri and the redirect_uri should be the full URI as defined in [I-D.ietf-oauth-v2]. The way this registration is performed is out of scope of this document. Every actual redirection URI sent with the respective client_id to the end-user authorization endpoint must match the registered redirection URI. Where it does not match, the authorization server must assume the inbound GET request has been sent by an attacker and refuse it. Note: the authorization server MUST NOT redirect the user agent back to the redirection URI of such an authorization request.

Also in the OpenId Connect spec on the section Authentication Request says:

REQUIRED. Redirection URI to which the response will be sent. This URI MUST exactly match one of the Redirection URI values for the Client pre-registered at the OpenID Provider.

Facebook allows you to register the callback uri (redirect_uri) but it seems that, ignoring the specs, to simplify things for the developers, they only validate the domain of the argument received on the redirect_uri parameter, allowing any subdomain or path. That seems to be enough until one their clients has an Open Redirect vulnerability.

ESPN Open Redirect Vulnerability

Quoting the Open Redirect definition:

An open redirect is an application that takes a parameter and redirects a user to the parameter value without any validation.

ESPN site has one of these on this endpoint:

It not only redirects to the parameter specified uri without any validation, it also sends the current query string parameters (pretty dangerous).

The Covert Redirect attack

As you can imagine, mixing the fact that Facebook only validates the domain and the open redirect vulnerability on the ESPN site you can do something like this (did’t use URL encoding for better presentation):
redirect_uri= &url=

Once you execute that URL, Facebook will show their consent screen saying that ESPN is asking for permission and the token generated by Facebook will be sent to the


Covert Redirect is nothing new, and it is not a vulnerability on OAuth nor OpenId. there is a lot written about the redirect_uri parameter and how to validate it properly.

Cover Redirect is a mix of a poor OAuth implementation (Facebook) and an open redirector (ESPN). So, if you have an Open Redirector endpoint on you site fix it. On the Facebook side, they refused to fix the flexible redirect_uri long time ago, so you shouldn’t expect something new.

Categories: Blogs

OAuth Proof of Possession draft are here!

Leandro Boffi - Mon, 2014-04-28 18:46

One of the concerns about OAuth 2.0 is that it uses bearer tokens, that are a kind of tokens that are not tied to any context at all.

That means that any party in possession of a token can get access to the associated resources, without any other demonstration.

This month, the IETF team has published a couple of new drafts to enhance OAuth security against token disclosure. The first one you need to look at is an overview of the OAuth 2.0 Proof-of-Possession (PoP) Security Architecture, then you have semantics for including PoP in JWT,
a method for key distribution and a method for signing http request.

Categories: Blogs

Releasing Astor: A developer tool for token-based authentication

Leandro Boffi - Sun, 2014-04-13 10:09

I’ve just published in NPM the first version of Astor. Astor is a command line developer tool that helps you when you work with token-based authentication systems.

At this moment, it allows you to issue tokens (right now it supports JWT and SWT formats) to tests your APIs, basically you can do something like this:

$ astor issue -issuer myissuer -profile admin -audience

The result of running that command will be something like this:

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJodHRwOi8vc2NoZW1hcy54bWxzb2FwLm9yZy 93cy8yMDA1LzA1L2lkZW50aXR5L2NsYWltcy9uYW1lIjoiTGVhbkIiLCJhdWQiOiJodHRwOi8vc mVseWluZ3BhcnR5LmNvbS8iLCJpc3MiOiJodHRwOi8vbXlpc3N1ZXIuY29tLyIsImlhdCI6MTM5 NzM3NjU5MX0.d6Cb0IQsltocjOtLsfXhjseLcZpcNIWnHeIv4bqrCv4

Yes! a signed JWT ready to send to your api!

Astor basically works with a configuration file that saves issuers, user profiles and issueSessions configurations, that’s why you can say -issuer myissuer or -profile admin without specifing issuer key and user claims. To clarify, this is how astor.config looks:

{ "profiles": { "": { "": "Leandro Boffi", "": "" }, "admin": { "": "John Smith", "": "John Smith", "": "Administrator", } }, "issuers": { "contoso": { "name": "contoso", "privateKey": "-----BEGIN RSA PRIVATE KEY-----\nMIIEow.... AKCAQEAwST\n-----END RSA PRIVATE KEY-----\n" }, "myissuer": { "name": "", "privateKey": "MIICDzCCAXygAwIBAgIQVWXAvbbQyI5BcFe0ssmeKTAJBg=" } } }

Did you get that? Once you have created the different profiles and issuers you can combine them very easily to have several tokens.

Off course you can start from scratch and specify the whole parameters in a single command without using the config file:

$ astor issue -n -l privateKey.key -a Create user profile... Here you have some common claimtypes, just in case: - Name: - Email: - Name Identifier: - User Principal: claim type (empty for finish): claim value: Leandro Boffi claim type (empty for finish): claim value: claim type (empty for finish): Would you like to save the profile? y Enter a name for saving the profile: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJodHRwOi8vc2NoZW1hcy54bWxzb2FwLm9yZy93cy8yMDA1LzA1L2 lkZW50aXR5L2NsYWltcy9lbWFpbCI6Im1lQGxlYW5kcm9iLmNvbSIsImh0dHA6Ly9zY2hlbWFzLnhtbHNvYXAub3JnL 3dzLzIwMDUvMDUvaWRlbnRpdHkvY2xhaW1zL25hbWUiOiJMZWFuZHJvIEJvZmZpIiwiYXVkIjoiaHR0cDovL3JlbHlp bmdwYXJ0eS5jb20vIiwiaXNzIjoiaHR0cDovL215aXNzdWVyLmNvbS8iLCJpYXQiOjEzOTczODMwMzR9.1vy9kyY26N wjOQ4gqfy5ZBIQgovgw0gxd4TcVXWzFok Would you like to save the session settings? y Enter session name: token-for-test

As you can see, if you don’t use an stored profile you will be prompt for creating the profile in the moment, and once you have created the profile you can save it on configuration for the future!

And finally, you can provide a name for the whole session, in the example token-for-test, so next time you have to use the same settings you can do:

$ astor issue -s token-for-test

How to install it?

$ npm install -g astor

Next steps?

I’ll be adding token validation functionality, together with other token formats like SAML and maybe authentication flows!

Check readme on github for detailed documentation:

Hope you found it useful!

Categories: Blogs

Don't Inject Markup in A Web Page using Document.Write

Professional ASP.NET Blog - Tue, 2013-06-04 15:33
Look around just about every consumer facing site you visit these days has a third party script reference. Just about everyone uses Google Analytics and if you are like a former client of mine you have it and 2 other traffic analysis service scripts injected...(read more)
Categories: Blogs

Sending a Photo via SMS on Windows Phone

Professional ASP.NET Blog - Thu, 2013-05-30 03:01
Smartphones are awesome. They are the modern Swiss Army Knife because they do so much. One of the most important features in my opinion is taking photos. My Nokia Lumia has one of the best cameras available in a Smartphone and I like to use it all the...(read more)
Categories: Blogs

You Don't Need Windows To Test Your Web Site in Internet Explorer

Professional ASP.NET Blog - Wed, 2013-05-29 17:25
I know the majority of developers reading my Blogs are typically ASP.NET, enterprise developers. This means they develop on a Windows machine using Visual Studio most of the time. However in the broad market most modern web developers work on a MAC or...(read more)
Categories: Blogs

Using The New Git Support in WebMatrix 3

Professional ASP.NET Blog - Sun, 2013-05-26 15:19
WebMatrix is probably my favorite web development IDE because it is so simple and easy to use. Sure I use Visual Studio 2012 everyday and it has probably the best web development features available on the market. I also really dig Sublime. WebMatrix is...(read more)
Categories: Blogs

Publish to Directly To Azure Web Sites With WebMatrix

Professional ASP.NET Blog - Wed, 2013-05-01 20:39
WebMatrix is one of my favorite development tools because it really allows me to focus on what I love to do most, build modern web clients. It is a free Web IDE available from Microsoft and today they released version 3 for general availability . There...(read more)
Categories: Blogs

17000 Tweets in 365 Days - Not Too Many To Be Annoying

Professional ASP.NET Blog - Tue, 2013-04-30 14:29
What the heck was I thinking? Why did I do it? What did I learn? How did I do it? These are all things I have asked myself and others have asked me over the past year. It sounds like an odd labor to undertake and such an odd number. But yes I did 17,000...(read more)
Categories: Blogs

Introducing ToolbarJS - A HTML5 JavaScript Library to Implement the Windows Phone AppBar Functionality

Professional ASP.NET Blog - Sun, 2013-04-28 12:03
Back in February I released deeptissuejs , a HTML5, JavaScript touch gesture library. In January I release panoramajs a HTML5, JavaScript library to implement the basic Windows Phone panorama control experience. This month I am excited to release another...(read more)
Categories: Blogs

HTML5 and CSS3 Zebra Striping - Look Ma No JavaScript

Professional ASP.NET Blog - Mon, 2013-04-22 11:36
It was 5 maybe 6 years ago when I first started learning jQuery. One of the first things I did was order the jQuery In Action book . If you have read that book you should remember one of the first examples given, zebra striping a table. To me this example...(read more)
Categories: Blogs

Listen to Me Talk to Carl & Richard about the Surface Pro, Mobile Development and More

Professional ASP.NET Blog - Thu, 2013-04-18 11:53
A few weeks ago I got to sit down and chat with the DotNetRocks guys about a variety of topics. The initial premise for the interview was to talk about the Surface and why I love it so much. I think we got into some great tangents right from the start!...(read more)
Categories: Blogs