Check out this blog post:
Essentially you need to create a scheduled task that runs as often as you deem necessary (I have chosen hourly).
Here is an exported scheduled task that you can use:
<?xml version="1.0" encoding="UTF-16"?>
<Task version="1.3" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
<Arguments>start w32time task_started</Arguments>
This shall be a dumping ground that I keep updated with useful resources for optimising web sites.
Tweeps to Follow
Three simple steps:
openssl pkcs12 -in mycert.pfx -out mycert.txt -nodes
Then, to extract your private key (unencrypted, because of the -nodes flag above):
openssl rsa -in mycert.txt -text -out mycert.key
And your certificate:
openssl x509 -inform PEM -in mycert.txt -out mycert.cer
The other day something happened that caused the insert stage of a SQL Azure Migration Wizard run to abort, leaving me with a bunch of local .dat files and no data on the destination server.
To upload the data, you need to run the following command:
bcp database.dbo.tablename in dbo.tablename.dat -n -U username -P password -S remote.server.address.com -b 200 -h"TABLOCK"
You can play with various values for -b, the batch size; I found 200 worked reasonably well, although I didn't investigate too much.
Here is a great post on troubleshooting your AWS ELB.
The point that caught me out for about 10 hours today: if your ELB is configured for multiple Availability Zones, it doesn't matter whether your assigned instance list actually contains instances in all of those AZs; the ELB will still route traffic to every zone, and traffic sent to an empty zone gets lost and results in a 503 (or 504/324).
So, DON'T assign AZs that don't have any in-service instances running.
You want your site to issue far-future cache expiry values for resources like CSS and JS, to reduce bandwidth usage and improve page load speed.
However, when you release new code, you want everyone to receive it a.s.a.p.
But how do you achieve this when they all have cached versions that are cache-valid for a week or more?
Here’s what I do.
Create yourself a class such as this:
public static class Cacher
{
    public static readonly string Value;
    static Cacher() { Value = DateTime.UtcNow.ToString("yyMMddHHmmssfff"); }
}
Then, change your script and CSS tags from:
<link rel="Stylesheet" type="text/css" href="/assets/css/all.css" />
to:
<link rel="Stylesheet" type="text/css" href="/assets/css/<%: Cacher.Value %>/all.css" />
You can then use a mod_rewrite/ISAPI_Rewrite rule to remove the value:
RewriteRule ^assets/css/[^/]+/all.css /assets/css/all.css [L,NC]
The reason you want the value in the path and not in the query string is that some caches refuse to cache content for URIs that include a query string, regardless of the cache-control headers.
Alternatively, you could make the value the current assembly version, as sketched below. It depends on your use-case.
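For example, a minimal sketch of that variant (assuming you bump the assembly version with each release; the exact shape here is just illustrative, not the class above):

public static class Cacher
{
    // The URL segment only changes when a new build is deployed.
    public static readonly string Value =
        typeof(Cacher).Assembly.GetName().Version.ToString();
}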
Do you want to pull GA data into your site’s admin area? Here’s how to do it as simply as possible.
- Read this: https://developers.google.com/analytics/devguides/reporting/core/v3/reference
- Then this: https://developers.google.com/accounts/docs/OAuth2WebServer#refresh
- Then this: https://developers.google.com/analytics/devguides/reporting/core/dimsmets
Right, now go here: https://code.google.com/apis/console/ and create yourself an App, and a Client ID for a Web Application, so that you end up with a Client ID and a Client Secret.
Next, build the authorization URL described in the OAuth docs linked above, putting your Client ID from above in the client_id parameter (there's a sketch of this below), visit it in your browser and click "grant access/allow".
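If you prefer to build that URL in code rather than by hand, something along these lines works; the redirect URI shown is a hypothetical example and must exactly match the one registered against your Client ID:

// Send the admin user here to grant access; XXXXXX is your Client ID.
// The redirect URI below is a hypothetical example.
var authUrl = "https://accounts.google.com/o/oauth2/auth" +
    "?response_type=code" +
    "&client_id=XXXXXX" +
    "&redirect_uri=" + Uri.EscapeDataString("https://example.com/admin/oauth-callback") +
    "&scope=" + Uri.EscapeDataString("https://www.googleapis.com/auth/analytics.readonly") +
    "&access_type=offline"; // offline access is what gets you a refresh_token back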
You'll end up back on your redirect URI with a code parameter appended ("?code=yyyyyy"); "yyyyyy" is your "Code", keep this.
Next, open up Fiddler and issue this request:
POST https://accounts.google.com/o/oauth2/token HTTP/1.1
Replace YYYYY with your Code, XXXXX with your Client Id and ZZZZZ with your client secret.
Bang, get a response like this:
"access_token" : "PPPPPPPPP",
"token_type" : "Bearer",
"expires_in" : 3600,
"refresh_token" : "QQQQQQQQ"
Store the access_token and refresh_token, you’ll need these.
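If you'd rather drive that exchange from your application than from Fiddler, a minimal sketch using WebClient looks something like this (placeholders as above; the callback path is a hypothetical example and must match the redirect URI you registered):

using System.Collections.Specialized;
using System.Net;
using System.Text;

var form = new NameValueCollection
{
    { "code", "YYYYY" },          // the Code from the consent redirect
    { "client_id", "XXXXX" },
    { "client_secret", "ZZZZZ" },
    { "redirect_uri", "https://example.com/admin/oauth-callback" }, // hypothetical
    { "grant_type", "authorization_code" }
};

using (var client = new WebClient())
{
    // The JSON response contains access_token, refresh_token and expires_in.
    string json = Encoding.UTF8.GetString(
        client.UploadValues("https://accounts.google.com/o/oauth2/token", form));
}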
Now, make yourself a request!
Replace DDDDDDD with the ID of the profile you want to report on (find that here):
GET https://www.googleapis.com/analytics/v3/data/ga?ids=ga:DDDDDDD&metrics=ga:visits&start-date=2012-06-01&end-date=2012-06-25 HTTP/1.1
Authorization: Bearer PPPPPPPP
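From code, that report request is just a GET with the Authorization header set; roughly (same usings as the sketch above):

string accessToken = "PPPPPPPPP"; // the access_token from earlier

using (var client = new WebClient())
{
    client.Headers.Add("Authorization", "Bearer " + accessToken);
    string json = client.DownloadString(
        "https://www.googleapis.com/analytics/v3/data/ga" +
        "?ids=ga:DDDDDDD&metrics=ga:visits&start-date=2012-06-01&end-date=2012-06-25");
}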
Then, when your token expires, request a new one like this, posting your refresh_token with grant_type=refresh_token instead of a code:
POST https://accounts.google.com/o/oauth2/token HTTP/1.1
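In code, the refresh is the same POST with different form fields; a minimal sketch (same usings as the earlier WebClient example):

var form = new NameValueCollection
{
    { "refresh_token", "QQQQQQQQ" }, // the stored refresh_token
    { "client_id", "XXXXX" },
    { "client_secret", "ZZZZZ" },
    { "grant_type", "refresh_token" }
};

using (var client = new WebClient())
{
    // Response contains a fresh access_token and expires_in (typically no new refresh_token).
    string json = Encoding.UTF8.GetString(
        client.UploadValues("https://accounts.google.com/o/oauth2/token", form));
}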
Next, send me beer for writing the only tutorial on the ENTIRE INTERNET that explains this process concisely.
Here are some of the techniques I use to optimise my builds. Not all of them will be applicable to you, and not all of them work together conceptually.
Regardless, these techniques can considerably reduce your build time.
Get MsBuild using all your cores by passing the /m switch (a.k.a. /maxcpucount); you can pass /m:x to tell it to use x cores if you like.
Say you have a project structure like this:
where Core and Persistence are class libraries and Web and Api are web applications.
Do you really need Core and Persistence to be separate assemblies? Do you ever use one independently of the other? Are you really building a highly modular, reusable solution?
There is a huge overhead in firing up the compilation process for each assembly. Keep this to a minimum with as few assemblies as possible.
You might also have the following test projects:
Why? You can most likely reduce these to a single test assembly and use namespaces to separate the tests for the different areas, as sketched below.
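For instance, a single MyProj.Tests assembly could be laid out like this (the project and namespace names here are purely illustrative):

namespace MyProj.Tests.Core        { /* tests formerly in a Core tests project */ }
namespace MyProj.Tests.Persistence { /* tests formerly in a Persistence tests project */ }
namespace MyProj.Tests.Web         { /* tests formerly in a Web tests project */ }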
The biggest waste of time in my previous build mechanism was redundant building.
Say you have the following dependency tree:
If you call MsBuild twice, once for Web and once for Api, you will needlessly build Core and Persistence twice.
There are two ways to avoid this, one simple and one complicated.
Complicated – Manage the dependencies yourself
For me, this is a no-go really. It's more effort than it's worth for simple projects, and it's too unmaintainable for large projects. Essentially it involves building each project directly with MsBuild without the ResolveReferences target, then xcopying the artefacts around to each project and fiddling with reference paths. It gets very messy, very fast.
Simple – Build a single project
Option one: Just build your test assembly.
Continuing the same example from above, your dependency graph would look like this:
You can then use something like the following msbuild command:
msbuild src/MyProj.Tests/MyProj.Tests.csproj /t:ReBuild;ResolveReferences;PrepareResources;_CopyWebApplication /v:m /p:OutputPath=../../build /m /p:BuildInParallel=true
Note the _CopyWebApplication target: this is what "publishes" the web apps.
This will result in the following file system structure being created:
All your assemblies will end up in build/, along with a normal "published" version of each web site in its own sub-folder. You can then point your test runner at these assemblies.
Option two: Build a custom single project
Perhaps you don’t have a single test project to build, or you only want to build a subset of all your projects. In this case, you can make a custom project file, and just build that!
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <ProjectReference Include="src\MyProj.Web\MyProj.Web.csproj" />
    <ProjectReference Include="src\MyProj.Api\MyProj.Api.csproj" />
  </ItemGroup>
  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
  <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" />
</Project>
This way, each project is only built once, with the artefacts reused for each referencing project.
One thing I've found is that sometimes a compiler error is thrown: CSC : fatal error CS2008: No inputs specified. I've got some projects that do this and some that don't, and I've not been able to identify the difference that causes it.
Regardless, the solution is to include a .cs file (such as AssemblyInfo.cs) in the above project. This does result in an otherwise unwanted assembly being produced, but you can just ignore it. I'll update this post if/when I find out more.
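If you hit that error, the stub can be as small as this (a hypothetical Placeholder.cs whose only job is to give the compiler an input):

// Placeholder.cs - exists only so csc has something to compile for the custom
// project; the resulting assembly is never referenced and can be ignored.
internal static class BuildPlaceholder
{
}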
ILMerge or worse, aspnet_merge
Update: The below doesn’t work, but it will. Working on a patch for ILRepack that will fix this. Stay tuned.
Do you precompile your views with aspnet_compiler? If you do, you probably want to combine the multitude of App_Web_xxxxxx.dll assemblies that get created, to reduce your app's startup time and memory footprint. If you use the aspnet_merge that comes with the Windows SDK, you're gonna have a bad time. Use ILRepack instead. It's like ILMerge, but written against Mono.Cecil, so it's uber fast.
Say you have your artefacts in build/MyProj.Web; run this:
ilrepack /targetplatform:v4 /parallel /wildcards /log /verbose /lib:build/MyProj.Web/bin /out:build/MyProj.Web/bin/MyProj.Web.Views.dll build/MyProj.Web/bin/App_Web_*.dll
You can even go one step further and merge the assemblies into your web assembly for a single DLL:
ilrepack /targetplatform:v4 /parallel /wildcards /log /verbose /lib:build/MyProj.Web/bin /out:build/MyProj.Web/bin/MyProj.Web.Views.dll build/MyProj.Web/bin/MyProj.Web.dll build/MyProj.Web/bin/App_Web_*.dll
Using the Java YUI Compressor? STOP! Use the YUI Compressor MSBuild task instead; you will reduce the time this takes by several orders of magnitude. The Java compressor only accepts one file at a time, which means a new JVM is started for every file you want to compress, and that is slow.
There you have it, lots of ways you can make your slow build process run like lightning!
I simultaneously work on multiple web projects, and have a back catalogue of scores of sites that could need my attention at any time.
I always run my sites locally in a fully fledged IIS site (rather than using Cassini) which means each site needs its own hostname.
Until recently I had been managing this with my hosts file, simply adding a new line (something like 127.0.0.1 myproject.local) for each site. However, last week I reached breaking point as my hosts file was about 3 pages long.
Enter Velvet. Velvet adds wildcard support to your hosts file by acting as a simple DNS server that you can run standalone or (preferably) as a Windows service.
I have now reduced 3 pages of hosts entries to two lines (well, actually there are a few others, such as wildcard mappings to my colleagues' machines). This means I now rarely need to touch my hosts file, at least not for standard day-to-day project work.
Ultra time saving win.
Check out the project on github.
Feature suggestions welcome!
We all suck at estimating, regardless of how experienced we are. This is a fact that you should accept. Most of us are either ignorant to this or in denial. There are many ways we try to hide our inadequacies, mostly revolving around mathematical transformations of the form:
E’ = mE + c
I.e. make an arbitrary estimate (E), multiply it by some amount (m) and add a bit (c).
I'm not denying that there is some sense to this: you can spend considerable time and effort refining your favourite m value by tracking your velocity, holding regular retrospectives and reflective analyses, and so on.
This method alone, however, ignores significant mental "quirks" which affect the way you think and reason.
The effects I am talking about are:
- The “halo” effect
- Framing effects
- Attribute Substitution
- Base-rate neglect
- Anchoring
The “halo” effect
The “halo” effect is defined as “the influence of a global evaluation on evaluations of individual attributes”. What this means in the realm of software development estimation is that you are likely to estimate the individual parts of a project with a bias towards how you feel about the overall project.
If you’ve formed an opinion that overall the project will be easy, all your estimates for the component parts are likely to be lower than if you viewed the project as difficult (known as the “devil” effect).
- Ignore prejudices
- Judge tasks independently
- Don’t “do the easy ones first”
Framing Effects
Framing effects refer to the way our mind perceives data differently depending on how it is presented. For example, food which is "90% fat free" sounds much better than food described as "10% fat".
When estimating tasks, we are very likely to bias our judgement based on how the requirements are presented. For example, requirements which are positively worded/presented and which sound easy/appealing are much more likely to receive lower estimates.
- Has the way the requirements are worded affected your interpretation?
- Are your judgements of a specific problem being clouded by its surroundings?
Overconfidence and Substitution
Little weight is given to the quality or quantity of evidence when we form a subjective confidence in our opinions. Instead, our confidence depends largely on the quality of the story we can tell ourselves about the situation. What this means is that we are very likely to be confident in an estimate if we have convinced ourselves that we know what we're talking about.
This may sound obvious, but the devil lies in the detail. Do we really know what we are talking about? Our brains do not like doubt and uncertainty; we are much happier answering questions positively than negatively. When estimating a task, we are very likely to jump to a conclusion (and underestimate) if the task is familiar to us. How many times have you said "oh yeah that's simple, it will take X hours" without _really_ thinking it through? This is known as the mere-exposure effect.
This is where another problem creeps in, attribute substitution. When our brains are faced with a complex question, our sub-conscious often substitutes the problem for a more familiar, easier problem. This often happens without us realising. This leads to misunderstandings of the problem domain and therefore inaccurate estimates.
- Ask yourself why you are confident
- Are you biasing because of familiarity?
- Have you really understood the problem?
Base Rate Neglect
Base rate neglect or base rate bias is an error which occurs when assessing the probability of something and failing to take into account the prior probability. I use the term here partly in the strictest sense (as defined by Wikipedia above) and partly in a more general sense.
When we estimate tasks, we often fail to account for the "surrounding" or "prior" cost of the task: the complicated merge that will be required after the change, the reliance on a third party delivering on time, the API documentation being adequate, and so on.
- Consider all the implications
- What assumptions have you made? – Are they sensible? Really?
- Refuse to estimate unknowns
Anchoring
Anchoring is an effect that causes you to bias your estimate based on estimates you've already seen or produced. If two developers are discussing an estimate and the first says "10 days", the second developer is more likely to produce a number closer to "10 days" than if they hadn't spoken previously. This is one of the main benefits of using planning poker: by avoiding the influence of others until you have produced your estimates, you are much more likely to get a broader range of estimate values.
You may think broader means worse, but this is not necessarily the case. If one dev thinks a task is "1 day" but another thinks it's "10 days", you've identified a problem. Either you have a huge skill disparity, or there has been a fundamental misunderstanding by one or both parties!
- Try to view each estimate in isolation; don't let previous numbers skew future ones
- Don't confer with other estimators until you each have your own value, then justify why they are different
When producing estimates, be aware of these biases and make a conscious effort to spot when you might be succumbing to them. Being aware that you are likely to be biased is the first step in producing more accurate estimates; actually counteracting the biases in practice can be much harder.
- Estimate alone at first
- Get 2nd (or more) opinions on estimates, but be careful not to cause framing or anchoring biases
- Make a conscious effort not to misinterpret a requirement based on its wording
- Be sure you've not jumped to conclusions because of familiarity with the problem
- Review the estimates you produced last. Are they biased by the estimates you produced first?
- Estimate each task in isolation. Don't let your opinions of other tasks or of the whole project affect individual parts
If you are interested in learning more about the psychology of decision making and biases and how you can make personal improvements (not just in development estimation) then I highly recommend you get your hands on a copy of the following: