Thursday, September 22, 2016

Asymmetrical Signing with Google Macaroons

I'm a big fan of Macaroons, but it's always bugged me a bit that if you want separate issuing/consuming services you need a shared secret between them. The main reason I didn't like this is that the consuming service then has the power to create new macaroons with different permissions. Of course, the consuming service doesn't exactly need permission to access/modify its own resources, so it's really only an issue if you don't use a different shared secret for each service. It also makes it a rather big pain to rotate secrets, as you have to update them in multiple places.

One of the things I really like about signed JWTs is that anyone can safely verify the token without accidentally being allowed to create new ones. So I've been pondering the idea of public/private keys and macaroons for a while, trying to figure out if they can work together nicely.

Not sure how useful this is, but I haven't seen anyone else write about this topic (which could be a bad sign for security), so I thought I'd do a brain dump.

Disclaimer: the following was a shower thought and hasn't been particularly well explored, so it might not be particularly secure (I'm a security enthusiast but not an expert; I have revised this post at least once to fix a couple of silly mistakes).

Standard method: Token + Secret => Hash, then send the Token + Hash to the relying party.

To verify the token you need to know the Secret.

Adding PPK is really quite simple: you encrypt the secret + hash used as part of the macaroon signature with your private key and include the result as part of the token. This means anyone with the public key can verify the token's authenticity, and then that the chain afterwards is valid.

Potential PPK method: Token + Secret => Hash, then send Token + Hash + Encrypted Secret to the relying party (T + S => H; send T:H:E(S)).

To verify the token you first obtain the secret by decrypting it, then follow the standard Macaroon verification.
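As a sketch of that verification flow - note the asymmetric step is stubbed out with a symmetric XOR pad purely so the example runs; a real implementation would use an actual public/private key pair:

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

# Stand-ins for E()/D(): XOR against a fixed pad. NOT asymmetric crypto -
# just an invertible placeholder so the T:H:E(S) flow can be demonstrated.
PAD = hashlib.sha256(b"stand-in-for-issuer-keypair").digest()

def encrypt(data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, PAD))

decrypt = encrypt  # XOR is its own inverse

# Issuer: T + S => H, send T:H:E(S).
secret = hashlib.sha256(b"per-token-secret").digest()  # 32 bytes, same as PAD
token = b"user=alice"
h = sign(secret, token)
message = (token, h, encrypt(secret))

# Relying party: recover the secret, then do standard Macaroon verification.
t, h_received, enc_secret = message
recovered = decrypt(enc_secret)
assert hmac.compare_digest(h_received, sign(recovered, t))
```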

Chaining after this point works the same as before: the old hash is used as the secret for the new token. T + T1 + OldHash => NewHash, then send T + T1 + NewHash + Original Encrypted Secret.

However, this isn't quite enough: anyone with the decryption key could change the token, create their own hash with the secret they decrypted, and pass on the old encrypted secret with the message. For this reason the original hash also needs to be included in the encryption for verification: T:H:E(SH), or T:T1:H1:E(SH) for a chained message. It also needs to be included unencrypted so that chaining works (you can't chain without the previous hash).
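Putting the pieces together, the chained flow with the hash sealed inside the encrypted blob might look like the following sketch. The encryption itself is elided: the sealed blob is modelled as a tuple an intermediary is assumed unable to forge, and all names are invented.

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

# Issuer: root token. The tuple stands in for E(S || H) - an opaque blob
# intermediaries can't forge because they lack the encryption key.
secret = hashlib.sha256(b"per-token-secret").digest()
token = b"user=alice"
h = sign(secret, token)
sealed = (secret, h)  # E(S || H)

# Intermediary appends caveat T1; the old hash is the key for the new one.
caveat = b"action=read-only"
h1 = sign(h, caveat)
chained = (token, caveat, h1, h, sealed)  # T : T1 : H1 : H : E(S || H)

# Verifier: open the sealed blob, confirm the root hash wasn't replaced,
# then replay the chain.
t, t1, h1_received, h_received, (s, h_sealed) = chained
root = sign(s, t)
assert hmac.compare_digest(root, h_sealed)               # root is authentic
assert hmac.compare_digest(h1_received, sign(root, t1))  # chain is intact
```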

It does mean anyone with the public key can remove constraints that were added after the original server by rebuilding the entire chain without the offending parts, so you still don't want to give the public key to just anyone. But that's a limitation of Macaroons in the first place; the secret key is secret for a reason.

I believe it still allows constraints to be added to the token by anyone it passes through, which still makes it more powerful than standard JWT PPK verification.

What it appears to allow is for the secret to be rotated, or even randomized completely. That might not be a major benefit when you consider you now have a new secret (the 'public' decryption key) which is still hard to rotate, as it's not really public (and could introduce problems if people treat it too publicly); what you actually have is asymmetrical encryption with different secrets at the issuer and the consumer.

So overall I think it's an improvement over standard Macaroons but still doesn't solve everything I was hoping for.

  • Consuming services can't create new tokens
  • Still can't easily rotate keys

But maybe something like the ratcheting from Whisper Systems could be used to improve that (https://whispersystems.org/blog/advanced-ratcheting/).

Dynamic Navigation via Claims-based Dependency Injection (website composition)

It's not uncommon for websites to show/hide navigation based on what roles a user has. This becomes more problematic when the links being shown/hidden are links to other websites. Each site then needs to know about the others, and any time you deploy a new site as part of the suite you need to update each app to add it, and maintain the role-based logic for which items to show or hide. A solution I've seen for this is to create a web service and ask it what nav you should be showing; I'm not a huge fan of pull-based services, so this has always bugged me.

I am, however, a big fan of Dependency Injection, and while this issue isn't exactly all that related, it occurred to me that your login portal (WS-Fed, OAuth or whatever) can kind of be used like a DI container to inject things into dependent apps. The change is extremely simple: instead of the IdP (just) telling apps what roles the user has, it also passes through a list of navigation items to display for that user. The nav then becomes a fairly dumb component that just reads the claims passed in and shows them, thus centralising all the logic around what to show to which user; and considering the IdP normally has a list of relying parties anyway, it's not exactly new information to it. Of course, it doesn't really work for internal links or links to systems not controlled by the IdP.
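A sketch of the idea - the role-to-navigation mapping, URLs and names below are all made up; the point is only that the IdP resolves the nav once and relying parties render it blindly:

```python
# Hypothetical central mapping held by the IdP: which apps each role sees.
NAV_BY_ROLE = {
    "admin":  [("Reports", "https://reports.example"),
               ("Admin", "https://admin.example")],
    "member": [("Reports", "https://reports.example")],
}

def issue_claims(user: str, roles: list) -> dict:
    """Build the claim set the IdP hands to every relying party."""
    nav, seen = [], set()
    for role in roles:
        for label, url in NAV_BY_ROLE.get(role, []):
            if url not in seen:          # de-duplicate across roles
                seen.add(url)
                nav.append({"label": label, "url": url})
    return {"sub": user, "roles": roles, "nav": nav}

# A relying party's nav component just renders claims["nav"] as-is.
claims = issue_claims("alice", ["member"])
```

Moving a user group onto a different (say, beta) version of an app then becomes a change to the mapping at the IdP, with no redeploys of the other sites.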

This is great when different roles have access to different subsets of apps: you can use this technique to create modular user portals that reuse common functionality/apps but still feel like a single portal (as long as the look and feel is common). Where it becomes really neat is when you start looking into beta testing / A/B testing / slow rollouts, where you can replace entire applications for specific user groups with a newer version instead of a big-bang approach. Again, this isn't really suitable for in-app features (especially if you have multiple different toggles), but for those you can fall back to roles.

Tuesday, March 17, 2015

Simple Tfs Build Monitor

I wanted a nice easy way to tell if there's a broken build and didn't find any of the available build monitor tools obvious enough while still being unobtrusive. So I wrote a little console app that changes the wallpaper colour, which shows through the taskbar, and scheduled the application to execute every 15 minutes. It's not at a level that I could roll out to the entire company (changing wallpaper is a tad mean) but it's a good proof of concept.


Thursday, January 30, 2014

Building Xamarin Applications on a TFS Build Server

Enabling Android Builds


Update: Xamarin have done a bunch of work which makes it much easier to integrate with TFS, and this will only get easier with Build vNext, as it uses actual Visual Studio to build instead of just MSBuild.

  • Install the Android SDK to a common path
  • Edit the TFS build definition, and in the advanced settings find the "MSBuild Arguments" property and set it to "/p:AndroidSdkDirectory=E:\Android\android-sdk" (obviously with the correct path)
  • Edit the *.csproj file for the Android project and change the first line so that DefaultTargets="Build" becomes DefaultTargets="SignAndroidPackage".
  • Queue a build in TFS - the APK should be put in the drop folder along with the other binaries.


Enabling iOS Builds

  • Open up the iPhone *.csproj file in a text editor and change the first line so that DefaultTargets="Build" becomes DefaultTargets="Build;_TfsRemoteBuild". Then add a conditional PropertyGroup and Target:

<PropertyGroup Condition=" '$(BuildingInsideVisualStudio)'!='true' ">
  <ServerAddress>Name or IP of Mac Build Server</ServerAddress>
  <BSAT>NjY3MjY5</BSAT>
  <HttpPort>5000</HttpPort>
</PropertyGroup>
<Target Name="_TfsRemoteBuild" Condition=" '$(BuildingInsideVisualStudio)'!='true' " DependsOnTargets="_RemoteBuild" />

BSAT is the base64-encoded PIN you used to connect to the iOS build agent from Visual Studio.



The server committed a protocol violation. Section=ResponseStatusLine
No known solution; the following thread may help:

http://forums.xamarin.com/discussion/3696/how-to-build-from-msbuild-command-line-trig-build-on-osx-from-windows

Failed to fetch manifest from the build server

  • Ensure the build server and the Mac build server are running the same version of Xamarin 

Other Tips

  • Remember to use build server tags to mark any build controller you set up to build Xamarin builds, so that you can filter to just compatible build servers when creating a build definition
  • You will need to log in to your build server and set up your Xamarin license in Visual Studio - the licenses are stored in a common place, so it shouldn't matter who you log in as to set this up.

Friday, September 13, 2013

Automatically create NuGet packages for TFS/SharePoint dlls

I write a number of TFS Plugins, meaning I end up referencing all sorts of TFS assemblies. This entails trawling through the various client and server installation directories trying to find the particular dll I'm looking for.

As I like my projects to build without needing to install any extra dependencies manually, this means either I include the files when I check into source control, or more recently I manually create a NuGet package for each dll and put it in my private repository.

Since I also have a habit of updating to the latest version of TFS, I find I need to repeat this process quite often - finding all these assemblies, creating NuGet packages, then updating my plugins and redeploying.

I finally got sick of it enough to write a small PowerShell script that finds any assembly starting with Microsoft.TeamFoundationServer.* and creates packages, including package references to the other packages.

This script should also work for other products like SharePoint with a few tweaks.
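The actual script is PowerShell and isn't reproduced here; as a rough illustration of the gist (hypothetical names throughout, and the dependency list is taken as input rather than read from assembly metadata as the real script would):

```python
from pathlib import Path

PREFIX = "Microsoft.TeamFoundationServer."

def find_assemblies(install_root: str):
    """Trawl the installation directories for matching DLLs."""
    return sorted(Path(install_root).rglob(PREFIX + "*.dll"))

def make_nuspec(dll_name: str, version: str, dependencies) -> str:
    """Emit a minimal .nuspec, listing other generated packages as dependencies."""
    package_id = Path(dll_name).stem
    deps = "\n".join(
        f'      <dependency id="{d}" version="{version}" />' for d in dependencies
    )
    return f"""<?xml version="1.0"?>
<package>
  <metadata>
    <id>{package_id}</id>
    <version>{version}</version>
    <authors>auto-packager</authors>
    <description>Auto-generated package for {dll_name}</description>
    <dependencies>
{deps}
    </dependencies>
  </metadata>
</package>"""

spec = make_nuspec("Microsoft.TeamFoundationServer.Client.dll", "12.0.0",
                   ["Microsoft.TeamFoundationServer.Common"])
```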

Sunday, June 23, 2013

Using Mvc 4 Display Modes for Feature Toggles

It occurred to me that it's possible to use the new Mvc 4 Display Modes feature (meant for switching views for mobile versions) as a complement to Feature Toggles.

Obviously this can only switch out an entire view (or partial view) but it can be a nice alternative to putting lots of @if(featureEnabled) {} tags through your view.

Add the following code to your global.asax Application_Start (or create an App_Start module).

DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode("{NameOfFeature}")
{
    // Replace IsFeatureEnabled with your own true/false feature check.
    ContextCondition = context => IsFeatureEnabled(context)
});

Then create two views - one for the feature being disabled and one for it being enabled - e.g. Index.cshtml and Index.{NameOfFeature}.cshtml.

This could also be used for similar concepts like permissions and a/b testing.

Thursday, June 20, 2013

Tfs Extensibility - Automatically Create Code Reviews on Checkin

I created a small plugin that has a percentage chance of creating a code review request on checkin. You could enhance this pretty easily to create reviews using a more complex condition (based on how long it's been since someone last had a review, or the size of the checkin, etc.); however, I've found the 5% rule to be fairly successful, mainly because people have gotten used to the review feature and have started requesting code reviews manually for things they feel are important.

The major gotcha I found while coding this was the need to impersonate the user performing the checkin, as the creator is the only user who can close the review (which isn't very useful when it's the service account).

Friday, February 15, 2013

Tfs Extensibility - Filtering Lab Management Test Agents by Test Configuration

Original Problem
As previously mentioned, there's no built-in way to pick a test agent automatically based on the selected test configuration. After much digging I have found a partial workaround.

Theory
It turns out that, like a lot of TFS components, the test controller supports plugins. These inherit from TcmRunControllerPlugin and have three methods - Init, TestRunStarting and TestRunCompleted. Unlike other TFS plugins they are not automatically detected and need to be manually added to qtcontroller.config (in C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE).

I found that the TestRunConfiguration object has a StringDictionary called AgentProperties; upon further inspection I noticed that it appeared to be used to filter test agents to find a suitable one to run the tests on. By default it has a filter for environment name. As a small test I added another hardcoded filter by overriding the Init method in a very simple plugin. To my surprise, whenever tests were run they would now complain that no suitable agent could be found; by changing my filter it quickly became apparent that this method worked for machine tags.

By overriding the Init method, and with a bit more digging, I managed to get a list of properties from the Test Configurations and apply them as filters for the agent selected.

Limitations/Issues
I created a Test Configuration with the value "IE:9" and another with "IE:10", and labelled the web client machines with the matching tags. Running the IE9 tests from MTM would always pick the machine tagged IE:9, and the IE10 ones would pick the machine tagged IE:10. Running both together would look for a machine labelled IE:9,10, which it wouldn't find, so the run would fail. Unfortunately I currently have no way around this limitation.

When running coded UI tests via a lab build you can only select one Test Configuration anyway, so this limitation is only a problem when manually running the tests in MTM.

The test controller plugins don't appear to be documented, which may mean this breaks in a future update.

Cross Browser Support
This idea can be extended to other browsers too using the cross-browser feature from Update 1: simply create an extra machine for each browser you wish to test and label it accordingly. I'd probably change my labelling scheme from IE:9 to Browser:IE9 to be a bit more consistent. On each machine add a system environment variable called browser set to the correct name, then in your coded UI test make sure you always grab this value and set BrowserWindow.CurrentBrowser to it.

Advantages over other methods
  • Tests can be run from MTM without needing to manually select a different environment to run them in (as long as you only run one Test Configuration at a time)
  • Machines other than the test agents can be reused (eg database/web server), meaning fewer Hyper-V resources are needed
  • Works fine with TFS 2012's auto-configuration of test clients, which causes issues for the multiple-test-client-role method

Potential Extensions (not sure if they're possible)
  • A way to disable/enable this filtering (an extra property in a test configuration would easily do it)
  • I'm still looking for ways to allow multiple Test Configurations to be run in one test run from MTM.
  • Overriding the browser environment variable automatically so I can use the same virtual machine for IE, Chrome and Firefox (obviously still needing a second machine for IE9 vs IE10).
  • Check if any other types of filters are available; I did see some XML floating around suggesting you may be able to filter based on available RAM.


Tuesday, February 12, 2013

Microsoft Test Manager / Lab Management Misconceptions

As we were not lucky enough to have Visual Studio 2010 Ultimate, my knowledge of MTM and LM was fairly piecemeal. This meant that when I finally got to install them as part of 2012 (available in Premium as of 2012), I found a lot of things didn't work like I had expected, or were simply missing.

Test Configurations and Machine Tags are not related
While at first glance it appeared I could label a machine with "OS: WindowsXP" and create a corresponding test configuration with the same key/value pair, MTM doesn't seem to actually do anything with them.

Machine Tags appear to have no function at all, except to help organize machines.

Test Configurations seem mainly intended for manual testing, so you can manually test each software configuration you feel is important. If you record your manual test, the recording seems to be available to every configuration and can therefore be overwritten. I presume this is to make it easy to rerun the same test on different machines, which means you can use it to manually test a Windows app in different versions of Windows, or in multiple versions of IE. The same restriction applies to coded UI tests: you can only attach one coded UI test to a test case, and it must apply to all test configurations.

It should be possible to specify that the machine the test runs on must have x, y and z machine tags, letting you test all sorts of combinations like Windows XP + IE 7 + no Flash, or Windows 8 with UAC enabled. Instead it appears that tests are split into groups of about 100 and dished out between the available test agents in the environment.

UserVoice Suggestion

Microsoft's recommended workaround is Configuration matrix testing using Visual Studio Lab Management, although it's limited to 2 configs (eg IE/Firefox), would need to be manually extended to support more, and would need to be upgraded to 2012. It also means you need extra virtual machine capacity, as you can't have IE and Firefox on the same machine. There are issues with this approach in 2012, as environments will have their agents automatically deployed (or removed) depending on their roles. Basically, you can't use MTM to update the environment, or use snapshots, if you want to use this approach.

Another potential workaround that I've come up with is to use a test controller plugin to filter the available test agents.

Test Playback in browsers other than IE is only supported for Visual Studio Coded UI Tests
OK, no big deal - I'll just convert my test to a coded UI one (which seems to be more reliable anyway) then associate it back to the test case.


MTM Recorded Tests don't support assertions
Unless you convert to a VS CUIT you can't automatically perform checks at the end to ensure the desired behaviour occurred. Again, no big deal - CUITs seem the better option in the long run anyway.


Coded UI Tests aren't passed the Test Configuration
While it's easily possible to switch between IE/Firefox/Chrome in code, you can't seem to do this based on the Test Configuration. You can use a CSV DataSource attribute to run an automated test multiple times and switch the browser for each run, but that doesn't seem as nice as simply adding a new Test Configuration.

Now I'm confused as to what the point of test configurations is for automated tests, as it appears to basically just run the same test four times in the same environment on the same test agent.

The Lab Management Build Template can only run one configuration
If you want to run the tests in multiple configurations, it appears you need to set up multiple lab builds. An advantage of this is that you can set a different environment per build, which does mean you can test different versions of IE.

Lab Environments cannot have machines added (or removed)
You can add/remove machines from templates, but not from environments once they've been deployed. This just means you need to be careful not to customise machines heavily after deployment to get your environment up, as you may need to redeploy on occasion.


Auto running tests on build in VS starts coded ui tests
Couldn't find a way to disable this, although I hear the latest CTP (2012.2) may be improving something in this area.


Conclusion
I don't mean for this to be an overly negative post, MTM & LM do seem to be very powerful but I am struggling to figure out how to best utilize them for my needs.

Thursday, February 7, 2013

Tfs Automation - keep user images in sync with active directory

In a previous post I showed how to set user images to match Active Directory in a console application.

I have updated this code to run as a recurring ITeamFoundationJobExtension. The first file creates the job in TFS to run every 24 hours, and runs it once straight away.

The second file is the updated code, which is now a TFS agent job; this required a few changes to run against the server API instead of the client one. They're fairly similar, but the client API tends to be a bit friendlier.

Installation
Copy the compiled job to C:\Program Files\Microsoft Team Foundation Server 11.0\Application Tier\TFSJobAgent\plugins and restart the agent service. Then run the console application to register the job. Finally, check the state of the job by looking in the TFS database:

select * from tfs_configuration.dbo.tbl_JobHistory where jobid = 'fa60c04e-c996-413e-8151-15933f5a2bac'