Peer review in the Microsoft Open Source labs
Microsoft needs to work extra hard to win over members of the Open Source community, and its Open Source Software Lab is at the forefront of that effort. One of the lab's recent proposals is to open up its entire research plan to peer review, to make sure it is working in the areas its customers actually care about.
Perhaps even more importantly, the lab will open each research activity to peer review before it starts, so that the research results carry more authority. This blog post includes the details, and the comments make interesting reading. The following extract gives some indication of how the peer review process could improve the quality of the research:
The peer review feedback could let us know (just a sample):
- Was the hardware used the kind that would be used in real-world situations?
- Did the topology make sense, or did we need to evaluate different topologies?
- Were the workloads realistic?
- What were some common variances in workload?
- Were we using the software that our peers would use to manage and download patches?
- Were there factors, such as quarterly financial report generation, that meant a realistic experiment would need to span more than the period specified?
- Did the assumed distribution of patches make sense, or were there fewer or more patches?
- Did our peers actually care about the failures we would measure?