TheShed

.Net book recommendations

Category: Library
#.net #programming #books

Top 10 books every .Net developer should own

Test Coverage

Category: Library
#Engineering #Test #CodeCoverage

What's the right amount of test coverage? [Test Coverage and Post-Verification Defects: A Multiple Case Study](https://onedrive.live.com/redir?resid=493B8BCE8FC1C3DA!26920&authkey=!AFbVeq8YcyR0jDQ&ithint=file%2cpdf) provides some insights. Interestingly, they find:

that the test effort increases exponentially with test coverage, but that the reduction in field defects increases linearly with test coverage.

In other words, each successive field defect takes more and more test effort to prevent. Based on their study, most of the benefit accrues up to around 80% code coverage.
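The shape of that trade-off is easy to see with a toy model (my numbers, not the paper's): if effort grows like exp(k·c) with coverage c while defect reduction grows linearly, the marginal effort per defect prevented climbs steeply past 80%.

```python
# Toy model, not from the paper: effort ~ exp(k * coverage), defect
# reduction ~ linear, so the marginal effort per defect prevented
# explodes at high coverage. k = 5 is an arbitrary illustrative constant.
import math

def marginal_effort_per_defect(coverage, k=5.0):
    effort_slope = k * math.exp(k * coverage)  # derivative of exp(k * c)
    benefit_slope = 1.0                        # linear reduction in defects
    return effort_slope / benefit_slope

for c in (0.5, 0.7, 0.8, 0.9, 0.95):
    print(f"{c:.0%} coverage: {marginal_effort_per_defect(c):7.0f}")
```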

Conway's Law

Category: Library
#engineering #organisation #structure

organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations

—M. Conway, "How Do Committees Invent?", Datamation magazine, 1968. There's a PDF version with the original layout, but with bitmapped fonts.

http://www.melconway.com/Home/Conways_Law.html
http://en.wikipedia.org/wiki/Conway's_law

How Complex Systems Fail

Category: Library
#Systems #complexity #failure

How Complex Systems Fail, by Richard Cook, University of Chicago.

An articulation of how the nature of complex systems contributes to failure. Focused on systems such as transport, health care, and power generation, with a correspondingly implicit view of the hazardous nature of those systems.

When reading it with software engineering in mind, I substituted "bugs" for "accidents" and "quality" for "safety".

Quotes & Takeaways

For software development, I found Cook's #11 fascinating:

Organizations are ambiguous, often intentionally, about the relationship between production targets, efficient use of resources, economy and costs of operations, and acceptable risks of low and high consequence accidents. All ambiguity is resolved by actions of practitioners at the sharp end of the system. After an accident, practitioner actions may be regarded as ‘errors’ or ‘violations’ but these evaluations are heavily biased by hindsight and ignore the other driving forces, especially production pressure.

Technical Debt 101

Category: Library
#Programming #Engineering

A primer about technical debt, legacy code, big rewrites and ancient wisdom for non-technical managers

PDF of the Medium post

See also http://blog.ionelmc.ro/2014/08/14/the-three-sins-of-software-development/

Thompson ACM Turing Award on Trust (or how to backdoor a compiler)

Category: Library
#Programming #Security

Ken Thompson received the ACM Turing Award. It's a Big Deal™.

To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.

Thompson's Turing Award lecture, Reflections on Trusting Trust, describes why that is.
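A minimal sketch of the idea (toy names, Python standing in for C; Thompson's real version works on binaries and reproduces itself via a quine): a compromised compiler recognises two special inputs, the login program and the compiler itself, and silently alters each.

```python
# Toy sketch of the "trusting trust" attack -- not Thompson's actual code.
# The "compiler" is a source-to-source pass here; real compilers emit
# binaries, but the two-stage trick is the same.

BACKDOOR = "or password == 'joshua'"

def evil_compile(source: str) -> str:
    out = source
    if "def check_password" in source:
        # Stage 1: compiling login? Slip in a master password.
        out = out.replace("return password == stored",
                          f"return password == stored {BACKDOOR}")
    if "def evil_compile" in source:
        # Stage 2: compiling the compiler itself? Reproduce this injection
        # logic, so even a clean compiler source yields a dirty compiler.
        pass  # self-reproduction elided; see Thompson's quine discussion
    return out

login_source = """
def check_password(password, stored):
    return password == stored
"""

print(evil_compile(login_source))  # the "compiled" login now accepts 'joshua'
```

The payoff of stage 2 is that inspecting the compiler's source tells you nothing: the backdoor lives only in the binary and re-installs itself on every rebuild.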

Eskimo Hoax

Category: Library
#Language #Stories

Eskimos (Inuit; ᐃᓄᐃᑦ) are said to have hundreds of words for snow. Except they don't. The Eskimo Hoax explains.

Netflix HR

Category: Library
#Organisations #Work

Richard Branson hit the UK news recently with some comment/statement/assertion about his employees not having set vacation limits or a policy to follow. Unsurprisingly, it reminded me of the Netflix HR non-policy slides that hit the tech sphere a few years back. I was interested in how Branson's take was one thin slice of a bigger picture articulated by Netflix. Assuming that the Netflix model works, would taking one part of it in isolation work? Probably not.

Netflix Culture: Freedom and Responsibility slides on SlideShare.

Effective PowerShell

Category: Library
#Powershell #Programming #Book

Effective PowerShell

Most A/B Tests are Illusionary

Category: Library
#Experiment #Data

Paper

Most Winning A/B Test Results Are Illusory, Martin Goodson (DPhil), January 2014

Summary

Demonstrates that standard statistical techniques are equally valid when applied to A/B testing, and that ignoring them can result in erroneous conclusions being drawn from A/B test results.

Statistical Power

Simply put: increasing the size of the sample you measure increases the power of the result, where power is the probability that the test indicates a difference when there really is a difference.

For A/B testing this means you need to run an experiment long enough that what you're measuring is actually a difference. The paper includes a methodology for calculating sample size; a textbook version is sketched below.
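For illustration, here's the standard two-proportion sample-size formula (the textbook calculation, not necessarily the paper's exact methodology; the baseline and uplift numbers are made up):

```python
# Sketch of the textbook two-proportion sample-size calculation.
from scipy.stats import norm

def sample_size_per_group(p_base, uplift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect `uplift` over `p_base`."""
    p_test = p_base * (1 + uplift)
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_power = norm.ppf(power)           # desired power (1 - beta)
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    return (z_alpha + z_power) ** 2 * variance / (p_base - p_test) ** 2

# e.g. a 5% baseline conversion rate, hoping for a 10% relative uplift:
print(round(sample_size_per_group(0.05, 0.10)))  # ~31,000 per variant
```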

Multiple testing

• Performing many tests, not necessarily concurrently, multiplies the probability of encountering a false positive.
• False positives increase if you stop a test when you see a positive result.
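The first point is simple arithmetic: with independent tests at a 5% significance level, the chance of at least one false positive across k tests is 1 - 0.95^k.

```python
# Family-wise error rate grows quickly with the number of tests:
# P(at least one false positive in k independent tests) = 1 - (1 - alpha)^k
alpha = 0.05
for k in (1, 5, 10, 20):
    print(k, round(1 - (1 - alpha) ** k, 2))
# 1 -> 0.05, 5 -> 0.23, 10 -> 0.40, 20 -> 0.64
```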

Regression to the mean

Over a period of time even random results will regress to the mean. If you use a smaller time window you may identify early winners that are in fact random winners. Look out for trends over time: if an initial uplift in an A/B test falls away, you may be observing regression to the mean.
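A quick simulation (hypothetical numbers, not from the paper) makes the effect concrete: run ten variants with no real difference, pick the early leader, and watch it fall back.

```python
# Regression to the mean under a null A/B test (no real difference):
# early leaders in conversion rate drift back toward the true rate.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.05
visitors = rng.random((10, 20_000)) < true_rate  # 10 null "variants"

early = visitors[:, :500].mean(axis=1)  # rate after 500 visitors
final = visitors.mean(axis=1)           # rate after 20,000 visitors

leader = early.argmax()
print(f"early leader's rate: {early[leader]:.3f}")  # looks like an uplift
print(f"same variant, final: {final[leader]:.3f}")  # back near 0.05
```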

Final quote

You can increase the robustness of your testing process by following this statistical standard practice:

• Use a valid hypothesis - don’t use a scattergun approach

• Do a power calculation first to estimate sample size

• Do not stop the test early if you use ‘classical methods’ of testing

• Perform a second ‘validation’ test repeating your original test to check that the effect is real
