Beware of TimeSpan.Parse

I am in the process of building a custom configuration section, and as part of it, I need a TimeSpan property. I have struggled a bit with figuring out the format to use for the TimeSpan string value in the config file, never getting 24 hours to come out as one day, no matter whether I specified "00:24:00" or "24:00:00".

After a bit of debugging, I found out that TimeSpan.Parse uses some fuzzy logic to decide whether you mean days or hours in the first part of the value. Notice the difference between parsing "23:00:00", which evaluates to 23 hours, and "24:00:00", which evaluates to 24 days.

? TimeSpan.Parse("23:00:00")
    Days: 0
    Hours: 23
    Milliseconds: 0
    Minutes: 0
    Seconds: 0
    Ticks: 828000000000
    TotalDays: 0.95833333333333326
    TotalHours: 23.0
    TotalMilliseconds: 82800000.0
    TotalMinutes: 1380.0
    TotalSeconds: 82800.0
? TimeSpan.Parse("24:00:00")
    Days: 24
    Hours: 0
    Milliseconds: 0
    Minutes: 0
    Seconds: 0
    Ticks: 20736000000000
    TotalDays: 24.0
    TotalHours: 576.0
    TotalMilliseconds: 2073600000.0
    TotalMinutes: 34560.0
    TotalSeconds: 2073600.0

So, if you actually want to specify 24 hours as a TimeSpan, you need to specify it as one day, with hours, minutes and seconds, like this: "1.00:00:00":

? TimeSpan.Parse("1.00:00:00")
    Days: 1
    Hours: 0
    Milliseconds: 0
    Minutes: 0
    Seconds: 0
    Ticks: 864000000000
    TotalDays: 1.0
    TotalHours: 24.0
    TotalMilliseconds: 86400000.0
    TotalMinutes: 1440.0
    TotalSeconds: 86400.0
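In code, the safe way around the quirk is to stick to the unambiguous formats. A small sketch (I avoid the ambiguous "24:00:00" here altogether):

```csharp
using System;

class TimeSpanParseDemo
{
    static void Main()
    {
        // "hh:mm:ss" is read as hours only while the hours part is 0-23.
        Console.WriteLine(TimeSpan.Parse("23:00:00").TotalHours); // 23

        // To express 24 hours unambiguously, use the "d.hh:mm:ss" format...
        Console.WriteLine(TimeSpan.Parse("1.00:00:00").TotalHours); // 24

        // ...or skip parsing altogether when you construct the value in code.
        Console.WriteLine(TimeSpan.FromHours(24) == TimeSpan.Parse("1.00:00:00")); // True
    }
}
```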
Posted in .NET | Leave a comment

COM+ and System.Transactions living happily ever after

Recently I had an issue where I had to use a COM+ transactional component from a WCF service. I was porting some old functionality from another COM+ component to a WCF service, when I stumbled across this issue.

The new and shiny way to replace ServicedComponents is, of course, to use the System.Transactions namespace. So, in good faith, I created a new TransactionScope inside a using statement, put the calls to two different transactional COM+ components inside it, and called Complete on the scope right before the end of the using block, as is considered good practice.

However, we had some problems with a backend system used by one of the components, so the call to it kept failing. However irritating a failing system might be, this time I was actually lucky it was failing, because it let me test my transactional rollbacks. And, to my surprise, the rollbacks weren’t executed in component 2 when component 1 failed.

Then I started googling (as always – how could I ever do anything but Hello Worlds without Google…), and came across this article: Interoperability with Enterprise Services and COM+ Transactions. It explains that when you need to interoperate with COM+ transactions from a lightweight System.Transactions transaction, you need to pass an EnterpriseServicesInteropOption value to the TransactionScope constructor. The examples in that article all try to get a pure .NET TransactionScope component to take part in an existing Enterprise Services (COM+) transaction. After a bit of trial and error, I found that to actually initiate a COM+ transaction from pure .NET code, you need to specify EnterpriseServicesInteropOption.Full. This makes .NET always create a COM+ compatible transaction, and any COM+ component called within that scope will take part in the transaction created.

All things worked, and I was happy.

When writing this blog post, I came across the following articles, which do actually explain the scenario I was trying to get to work:

Here is an example from the link above:

 public void UpdateCustomerNameOperation(int customerID, string newCustomerName)
 {
    // Create a transaction scope with full ES interop
    using (TransactionScope ts = new TransactionScope(
               TransactionScopeOption.Required,
               new TransactionOptions(),
               EnterpriseServicesInteropOption.Full))
    {
       // Create an Enterprise Services component

       // Call UpdateCustomer method on an Enterprise Services
       // component

       // Call UpdateOtherCustomerData method on an Enterprise
       // Services component

       // Do UpdateAdditionalData on a non-Enterprise Services
       // component

       ts.Complete();
    }
 }

So, COM+ Enterprise Services transactions and System.Transactions do play well together; you just need to set the EnterpriseServicesInteropOption correctly when you instantiate your TransactionScope (see the TransactionScope Constructor (Transaction, TimeSpan, EnterpriseServicesInteropOption) overload).

Posted in .NET, C#, COM+ | Leave a comment

VB.NET developers – Please don’t name your properties the same as your classes

Today I was reminded that C# is a bit stricter in its naming rules than VB.NET. After converting an old project from VB.NET to C#, I spent a good part of the afternoon renaming properties that had the same name as their class, which is not allowed in C#. But do we really need properties with the same name as the class? Maybe that’s a code smell?

Suggestion: Instead of making classes like this:

Public Class PhoneNumber
    Public Property Prefix As String
    Public Property PhoneNumber As String
End Class

Make them like this:

Public Class PhoneNumber
    Public Property Prefix As String
    Public Property Number As String
End Class

Then you can convert them to C# using any old tool for this (e.g. SharpDevelop), without having to hunt through your source code for references to the now no-longer-valid property PhoneNumber.PhoneNumber.
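For reference, this is roughly what the renamed class comes out as on the C# side (a sketch, not the literal converter output):

```csharp
using System;

// With the property renamed to Number, the conversion compiles. Keeping the
// name PhoneNumber would fail with error CS0542:
// "'PhoneNumber': member names cannot be the same as their enclosing type".
public class PhoneNumber
{
    public string Prefix { get; set; }
    public string Number { get; set; }
}

class Demo
{
    static void Main()
    {
        var number = new PhoneNumber { Prefix = "+47", Number = "12345678" };
        Console.WriteLine(number.Prefix + " " + number.Number); // +47 12345678
    }
}
```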

Posted in .NET, C#, VB | Leave a comment

WIF SAML token POST and requestValidationMode=”2.0″

Just a quick note on the WIF SAML token POST and problems like this (you’ve probably had these problems too, if working with WIF and .NET 4.0):

A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").

(see e.g. the “Why am I getting the ‘A potentially dangerous Request.Form value was detected from the client’ error?” question on StackOverflow for an example of this).

There are numerous answers to this question, some good and some not so good. Common to all of them is that they either turn request validation off completely, by setting validateRequest="false" on the <pages> element in web.config’s system.web section, or set the request validation mode to 2.0 for the entire app.

There is, however, a more clever way. We only need to set requestValidationMode to 2.0 on the specific URL that WIF posts the SAML token back to. This can be done with a <location> element (see location Element (ASP.NET Settings Schema) for details) in your web.config, like this:

  <location path="WIFHandler">
    <system.web>
      <httpRuntime requestValidationMode="2.0" />
    </system.web>
  </location>

The “WIFHandler” location does not need to exist in your app, as WIF will short-circuit the pipeline before ASP.NET tries to handle the request, and redirect you to the return URL (the ru value in the wctx parameter of the SAML token POST) instead.

In the WIF configuration section of your web.config file, be sure to match the “reply” parameter with the location where you set the request validation mode to 2.0:

        <wsFederation passiveRedirectEnabled="true" issuer="https://localhost/STS/" realm="https://localhost/MyApp/" reply="https://localhost/MyApp/WIFHandler/" />

Posted in .NET, C#, WIF | 5 Comments

Encrypt your WIF claims

WIF claims are by definition safe from tampering, as they are signed (and you do use SSL, don’t you?). However, there might be times when you don’t want even the end user to be able to read the contents of your claims. This might be because you use claims to transport some semi-secret or top-secret information, or information you just don’t want the user to know you use as a means of controlling his access to content.

The information on how to encrypt your claims is available; it is just a bit scattered. Some of it may be found here:

Putting it all together was the challenge, for me anyway. I have rolled my own STS, and starting from the code you get if you select “Add STS reference” in Visual Studio and then “Create a new STS project in the current solution”, we set up encryption like this:

public class MySecurityTokenService : SecurityTokenService
{
    public MySecurityTokenService(SecurityTokenServiceConfiguration configuration)
        : base(configuration)
    {
    }

    protected override Scope GetScope(IClaimsPrincipal principal, RequestSecurityToken request)
    {
        ValidateAppliesTo(request.AppliesTo);

        Scope scope = new Scope(request.AppliesTo.Uri.OriginalString, SecurityTokenServiceConfiguration.SigningCredentials);

        string encryptingCertificateName = WebConfigurationManager.AppSettings["EncryptingCertificateName"];
        if (!string.IsNullOrEmpty(encryptingCertificateName))
        {
            // Important note on setting the encrypting credentials.
            // In a production deployment, you would need to select a certificate that is specific to the RP that is requesting the token.
            // You can examine the 'request' to obtain information to determine the certificate to use.
            scope.EncryptingCredentials = new X509EncryptingCredentials(CertificateUtil.GetCertificate(StoreName.My, StoreLocation.LocalMachine, encryptingCertificateName));
        }
        else
        {
            // If there is no encryption certificate specified, the STS will not perform encryption.
            // This will succeed for tokens that are created without keys (BearerTokens) or asymmetric keys.
            scope.TokenEncryptionRequired = false;
        }

        // Set the ReplyTo address for the WS-Federation passive protocol (wreply). This is the address to which responses will be directed.
        // In this template, we have chosen to set this to the AppliesToAddress.
        scope.ReplyToAddress = scope.AppliesToAddress;

        return scope;
    }
}

The key is setting TokenEncryptionRequired and EncryptingCredentials on the scope.

Note also the comment in the template code, on using different certificates per RP in production code. You don’t want anyone but the RP you send the claims to decrypting the claims.
You would normally put this certificate in the “Other people” (aka “AddressBook”) certificate store, and you don’t need the private key for this certificate on the STS server(s).

Then, on the relying party (RP), you have to tell WIF where to find the certificate to use for decrypting the claims. The RP needs to have the private key of the certificate, to be able to decrypt the claims encrypted with its public key. It makes sense to put it in the “Personal” (aka “My”) certificate store.

This is done purely in configuration, in the microsoft.identityModel/service section of Web.config. Just insert an element like this:

<microsoft.identityModel>
  <service>
    <!-- The rest of your WIF config goes here... -->
    <serviceCertificate>
      <certificateReference x509FindType="FindByThumbprint"
                            findValue="<thumbprint of the certificate used for encryption>"
                            storeLocation="LocalMachine"
                            storeName="My" />
    </serviceCertificate>
  </service>
</microsoft.identityModel>

Then you get the following in your claims set, instead of plain-text claims:

      <xenc:EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element" xmlns:xenc="http://www.w3.org/2001/04/xmlenc#">
        <xenc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#aes256-cbc" />
        <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
          <e:EncryptedKey xmlns:e="http://www.w3.org/2001/04/xmlenc#">
            <e:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-oaep-mgf1p">
              <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
            </e:EncryptionMethod>
            <o:SecurityTokenReference xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
              <X509Data>
                <X509IssuerSerial>
                  <X509IssuerName>CN=MyCA, OU=MyOrg, O=MyComp, S=MyPlace, C=MyCountry</X509IssuerName>
                  <X509SerialNumber>***SERIAL NUMBER OF CERTIFICATE***</X509SerialNumber>
                </X509IssuerSerial>
              </X509Data>
            </o:SecurityTokenReference>
          </e:EncryptedKey>
        </KeyInfo>
        <xenc:CipherData>
          <xenc:CipherValue>***ENCRYPTED CLAIMS***</xenc:CipherValue>
        </xenc:CipherData>
      </xenc:EncryptedData>
That should be it. Now your claims aren’t available for anyone to look at.

Posted in .NET, C#, WIF | 6 Comments

Reminder: Your publics are public.

There has been quite a lot of activity in the blogosphere over the last few days, following GitHub’s Mass Assignment Vulnerability hit. I have spent parts of the day looking into this vulnerability and some of the suggestions for mitigating it.

What is the issue?

Ruby on Rails, which GitHub is built upon, has a feature known as Mass Assignment, which automagically maps your HTTP request parameters to object properties. The same feature is available in ASP.NET MVC. The feature, and its problematic side, is described in detail here:

In short, the problem is that if you have an action with the following signature, which is intended to update the address of a user:

public ActionResult UpdateAddress(User user)

which is called from a view, and User is defined like this:

public class User
{
  public string Name { get; set; }
  public string Address { get; set; }
  public bool IsAdmin { get; set; }
}

then even though you only meant to expose Address in the edit view (via a text field), if someone supplies a request parameter or POST variable with the same name as one of the other properties, the object is automatically populated with that value from the request/form parameters as well, even though this was not your intention.
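To make the problem concrete: the attacker only has to add one extra field to the posted form (the URL and values here are hypothetical):

```http
POST /User/UpdateAddress HTTP/1.1
Content-Type: application/x-www-form-urlencoded

Name=Mallory&Address=Some+Street+1&IsAdmin=true
```

Model binding will happily write IsAdmin onto the model, since a matching public property exists.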

This has been described as a vulnerability in ASP.NET MVC, and lots of different mitigation strategies are suggested in the links above, among others. However, I think all the suggestions overlook a subtle, but important feature of MVC:

  • If you make something public, it is really public

This applies to your action methods, which, if they are public, can be accessed via a URL containing the controller and action names. It also applies to your models, if you expose them in your views and controllers. When you think about it, it makes sense: if you declare something as public, you are fine with other parts of the system updating it. The special case with ASP.NET MVC is that this “other part of the system” could also be the user’s browser.

As you might have guessed, this leads us to the maybe simplest solution to the “vulnerability” (which is in fact, when you think it over, a feature rather than a vulnerability):

Make your view model’s setters internal or private:

public class User
{
  public string Name { get; internal set; }
  public string Address { get; set; }
  public bool IsAdmin { get; internal set; }
}

Simple. This prevents UpdateModel from automatically assigning any property other than Address, which has the only public setter, because model binding respects the visibility of your model’s setters.

Posted in .NET, C#, MVC | Tagged , , , , , , , | Leave a comment

Find .NET runtime versions in Powershell

We are using multiple versions of the .NET Framework at a client’s site, in a huge enterprise application.
We had an issue today where someone had introduced a dependency on .NET 4.0 too low down in the stack, so projects in solutions building after it would fail with the following error:

C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1360,9): warning MSB3258: The primary reference "OurMiddleTierComponent" could not be resolved because it has an indirect dependency on the .NET Framework assembly "mscorlib, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089" which has a higher version "" than the version "" in the current target framework. [D:\src\path\projectFile.vbproj]

In the process of trying to find out why it failed (using Reflector, NDepend, etc.), I put together the following PowerShell snippet to help us investigate which DLLs are 4.0 and which are 2.0.

It requires that you run PowerShell on .NET 4.0 if you have any .NET 4.0 assemblies in there.

dir -r -i *.dll | % { $version = [System.Reflection.AssemblyName]::GetAssemblyName($_.FullName).Version; echo "$($_.Name);$version" } 2> $null | sort > dll-versions.txt

This produces a sorted, semicolon-separated list of DLL names and their corresponding assembly versions.
The redirect of stderr to $null swallows the errors you get for files that cannot be read as .NET assemblies, for instance when the same DLL appears in multiple folders and .NET refuses to load it again.
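Note that GetAssemblyName returns the assembly’s own version number. If you want the CLR version a DLL was actually built against, you can read ImageRuntimeVersion instead; here is a rough C# sketch of the same scan (the folder path is hypothetical):

```csharp
using System;
using System.IO;
using System.Reflection;

class RuntimeVersionLister
{
    static void Main()
    {
        // Scan a folder (hypothetical path) and print each DLL's CLR header version,
        // e.g. "v2.0.50727" or "v4.0.30319".
        foreach (string path in Directory.GetFiles(@"D:\src\bin", "*.dll", SearchOption.AllDirectories))
        {
            try
            {
                // ReflectionOnlyLoadFrom reads metadata without executing any code.
                Assembly asm = Assembly.ReflectionOnlyLoadFrom(path);
                Console.WriteLine("{0};{1}", Path.GetFileName(path), asm.ImageRuntimeVersion);
            }
            catch (BadImageFormatException)
            {
                // Skip native (non-.NET) DLLs.
            }
            catch (FileLoadException)
            {
                // Same assembly identity already loaded from another folder.
            }
        }
    }
}
```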

And then you can use your tool of choice (Excel or whatever) to further investigate.

Posted in Uncategorized | Leave a comment