Enter Scrum

What follows is an excerpt from a longer work chronicling the crazy mishaps and happenings during a one-year period at an unnamed, but unfortunately real, company.
ABC Corp. won’t be credited as the inventor of the non-organizing self-organizing team, but they at least deserved a spirit award for the valiant effort of creating total chaos out of a zen-like theory of running software projects. You can’t say they didn’t try.

Scrum, like some religions, is based on the ideas of one man whose work is subsequently ruined by charlatans, tent-revival preachers and true believers. They refer to him as just Ken (Schwaber), and it made Daniel nauseated to watch the process over and over.

It’s based on accountability (which we’ll get into later) and other feel-good, rah-rah traits that an organization with poor management runs to while trying to figure out why its “teams” can’t get it “right”. It’s an easy sell to managers – the concept is that the teams should “self-organize”. It’s like getting paid to manage but not being required to do anything except “assist” or “remove obstacles”.

The most painful part of Scrum at ABC Corp. was sprint planning.

A sprint is supposed to be a given period of time in which the development team commits to getting certain pieces of work developed, tested and in shape to possibly ship. For instance, the team agrees to create a login screen. In most organizations this is referred to as a “user story”, as it describes something a customer (user) wants to accomplish. The “story” is given enough detail on paper to get started, and then it’s up to the developers, designers, business analysts, etc. to complete the concept, development and testing of the feature. It turns the work into a team effort. Not so at ABC Corp., which we’ll bullet out for brevity:

1)    Everything is high priority.
2)    Bugs just “need to get fixed”.
3)    There is no plan – no initiatives to group bodies of work. Each item is an individual task.

What’s a task? It could be the “login” story described above, or it could be a fix to some spelling on a product page. ABC Corp. management took it further, insisting on a decomposition of tasks down to the sub-atomic-particle level – completely bypassing the whole concept of “a team moving a ball down a field” – and insisting that everyone sit in a room together, all day, and identify all the tasks that would go into this login screen. A more clever team member, who unfortunately ended up in Nowheresville like Daniel, coyly commented one day:

“Plan every single task for a two week period up front.  Decompose those into very small pieces.  Assign those pieces and demand they be completed.  Hmm, uh, that sounds kind of like a waterfall approach.”
The other ABC Corp. employees disagreed that it was waterfall; they insisted the decomposition was necessary so that someone else could pick up those tasks if needed.

A change in management later on made this process even more fun by replacing the all-day whiteboard task-decomposition activity with forcing each developer to decompose his tasks in his IDE, on his laptop, while sitting in a room full of unproductive developers doing the same thing.

Now, Scrum is supposed to be in the Agile camp – hence, Agile methods. ABC Corp. management did not want the technical methods – what they heard was: “we get new features into production every two weeks. That’s what Scrum says.” And surely, they wielded this concept like a sword, slashing through any wary developer, technical manager or CIO without mercy.

Now, on to the accountability of Scrum (or any agile process): the developer was responsible, period. Recall that management of the company believed they had moved to Scrum to get features out faster – if that is the case, why in the world would a feature ever be late or not completed at all? Further, they knew from some random Scrum resources on the internet that developers are supposed to respond to Change. That is a big-“C” change – meaning that it’s OK for a division manager to change his mind mid-sprint, ask for a large change, and still have it delivered on time. Again, this is what Scrum tells them.

(To be clear, Scrum is nothing like what is described above or how ABC Corp. did it.)


Distributed Teams Book

It was finally published about a month ago. My contribution is a chapter on Trust. Yes, capital T.

Reasons Don’t Matter

I’ve been reflecting more on the philosophy and practice of Test-Driven Development lately. In truth, I’ve not been writing much code the last few months, so I’m reacquainted with the difficulty of getting back into the mindset of writing tests before code. I’m also renewed in spirit about the importance of following such a practice.

Some things I’ve run across recently that have helped get the thought process going again:

1) Re-reading the book The Productive Programmer and its reference to Occam’s Razor. The simplest explanation is usually the correct explanation: if testing is the rigor of software engineering, then testing has a huge priority. If testing is the largest priority, it should be addressed first. Hence, write tests first.

2) My favorite modern philosopher (whom, I know, I reference far too often), Nassim Nicholas Taleb of Black Swan fame, posted on Facebook something to the effect of “If someone gives you more than one reason for something he doesn’t have a real reason…”

What does this have to do with TDD? Well, see my article from a couple years ago – Wrong About TDD – I have a bunch of reasons. Then I come up with more reasons in TDD: Facts and Fallacies. I’m blue in the face with reasons, hubris and general hot air. I have more reasons than fingers. But, according to Taleb’s aphorism, all of my reasons mean that I may not have a good one.

Taking into account these two findings about my own thinking, I vow to rehash my approach to communicating TDD. Maybe not down to just one reason, but probably no more than two – the weary debater can stack the deck with opposing reasons all they like; a large balloon can still be deflated with one pin prick. The reasons are probably better utilized as excuses by someone who doesn’t want to change than as arguments for why they should.

There is more than sufficient confluence in favor of TDD by now. One should learn and practice TDD if they care about developing working software.
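The test-first rhythm from point (1) can be sketched in a few lines. This is a made-up example of mine (a hypothetical `slugify` function, in Python only for brevity), not code from any of the articles mentioned:

```python
import unittest

# Step 1: write the tests first. They describe the behavior we want
# and they fail until the implementation below exists.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("Ready? Go!"), "ready-go")

# Step 2: write the simplest code that makes the tests pass.
def slugify(text):
    # Replace non-alphanumerics with spaces, then join the words with hyphens.
    cleaned = "".join(c if c.isalnum() else " " for c in text)
    return "-".join(word.lower() for word in cleaned.split())
```

Running `python -m unittest` against this file executes both tests; the point is the order of the two steps, not the function itself.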

We all must change.  This is what we as humans do – we are born to evolve.

Open Source Contribution – Websucker

It may seem a class library like this would be readily available: download a web page and all its assets, and alter said web page to reference the downloaded assets. Well, when I needed to build this functionality for 4teaspoons I searched high and low – and not just for .NET code – any code whatsoever. No one seems to have done this. It’s more or less creating caches of a web page. Maybe it’s so simple no one thinks to create a shared library – when that is the case, it’s time to contribute the code back to the community.

In my instance I wanted to download a given page and save the HTML file and all assets up to Amazon S3. This library comes with the S3 provider. A provider class can be created to persist the assets just about anywhere – database, MongoDB, filesystem, etc. It’s really up to the developer what they need. I’ve taken care of the heavy lifting of parsing the pages and doing the transformation.
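The provider idea looks roughly like this. Websucker itself is .NET; the sketch below is a hypothetical Python analogue of the shape (all class and function names are mine, not the library’s actual API): the parser finds asset references, hands each asset to a provider to persist, and rewrites the HTML to point at wherever the provider stored it.

```python
import re

class AssetProvider:
    """Persist one asset and return its new URL.
    Subclass this per backend: S3, filesystem, database, etc."""
    def save(self, url, content):
        raise NotImplementedError

class InMemoryProvider(AssetProvider):
    """Toy provider: keeps assets in a dict and 'serves' them from /cache/."""
    def __init__(self):
        self.store = {}
    def save(self, url, content):
        name = url.rsplit("/", 1)[-1]
        self.store[name] = content
        return "/cache/" + name

def rewrite_page(html, fetch, provider):
    """Find src/href asset references, persist each asset via the provider,
    and rewrite the HTML to reference the stored copies."""
    def replace(match):
        original_url = match.group(2)
        new_url = provider.save(original_url, fetch(original_url))
        return match.group(1) + new_url + match.group(3)
    return re.sub(r'((?:src|href)=")([^"]+)(")', replace, html)
```

Swapping `InMemoryProvider` for an S3- or filesystem-backed class is the only change needed to persist somewhere else, which is the point of the provider abstraction.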

I hope this contribution is worthwhile and used.


Why REST over SOAP?

REST all the way, right? Or is that just another absolute software statement that we’ll retract, or at least recant, over the next few years? I’ve used both REST and SOAP with success, but was posed the question: why use REST?

Good links on the subject:


“More people need to ask themselves questions like do I really need to use the same type system and data format for business documents AND serialized objects from programming languages?”

The AWS team asking customers why and which API they prefer – REST or SOAP.

I come back to the following:  If you want downlevel caching (meaning requests and responses go outside your network, unless you have a very big network), stateless communication and a clean API – prefer REST.
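The caching point can be made concrete: a REST GET is identified entirely by its URL, so any intermediary (browser, proxy, CDN) can cache the response keyed on that URL, while a SOAP call is a POST with the operation buried in the envelope, which generic caches can’t key on. A toy illustration (the URL and function names are mine):

```python
def cached_get(url, cache, fetch):
    """A GET is safe and cacheable: any intermediary keyed on the URL
    can serve the stored representation without contacting the origin."""
    if url not in cache:
        cache[url] = fetch(url)
    return cache[url]

origin_calls = []
def fetch(url):
    origin_calls.append(url)  # count trips to the origin server
    return "representation of " + url

cache = {}
cached_get("https://api.example.com/products/42", cache, fetch)
cached_get("https://api.example.com/products/42", cache, fetch)
assert len(origin_calls) == 1  # the second request never hit the origin
```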

If your concern is a couple of methods over the wire, use SOAP, and if you’re on the Microsoft stack, put it all behind WCF. What about other stacks? I’m open to changing this opinion.

On Performance

The concept of good performance versus bad performance has struck me as a little too vague for a while. We had this problem at another ecommerce shop during design and code reviews – “does this perform well?” – but too often the answer was driven by gut instinct or some held assumptions about a particular technology. There is nothing wrong with gut instinct; it’s key to being decisive, but it’s maybe not the best way to make a series of informed decisions that improve a product or delivery to a customer over the long term.

Serendipitously, last fall I opened a copy of Communications of the ACM and found a great two-part series on “thinking clearly” about performance, which really helped clarify concrete items to look at from a design and architecture view (I like metrics, so that helps).

From there I was able to help our team look less at a general performance feeling and instead consider real response time and throughput to understand concurrency and capacity. These concepts present a dichotomy in traditional computing architectures, so dissecting each separately, and then bringing them back together, is helpful. The links below present this much better than I can.
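One handy way to bring response time and throughput back together is Little’s Law: average concurrency equals throughput times average response time. A quick sanity-check calculation (the numbers here are illustrative, not from any system mentioned):

```python
def avg_concurrency(throughput_rps, response_time_s):
    """Little's Law, L = lambda * W:
    average requests in flight = arrival rate * average response time."""
    return throughput_rps * response_time_s

# 200 requests/sec at 250 ms each keeps ~50 requests in flight on average,
# so a worker pool smaller than 50 will queue, inflating response time further.
assert avg_concurrency(200, 0.25) == 50.0
```

Running the numbers this way turns “does this perform well?” into a question about measured rates and times instead of a gut feeling.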

On the web, and especially ecommerce, we have to focus a very critical eye on the whole delivery chain to the customer/user/browser: First Byte server response times, content download, load distribution, CDN configuration, etc.

Over the years I’ve seen very smart developers join our team who simply haven’t had serious web exposure and have been more concerned about a LINQ operator than about the queuing delay at the service layer or the gigantic, uncompressed JavaScript files (as a really simple example, and not to discredit troublesome code). Or the opposite: too much premature optimization is done and project delivery is thrown off.

We had a great peak season from a technology perspective so I want to share these articles as much as possible.




Distributed Team Book to be Published

I’m excited – I’ve wrapped up the final edits for a book project collaboration I’ve been part of.  The title of the upcoming release is: Distributed Team Collaboration in Organizations: Emerging Tools and Practices, due to publish this spring.

I was very eager, and humbled,  to participate in this project when Dr. Kathleen Milhauser invited my chapter submission in late 2009.  I had already been working on distributed teams for about 7 years (with and without offshore teammates), was interested in the topic of trust and jumped in.  It was much tougher than I anticipated – there is a wealth of knowledge out there concerning trust, but much less on distributed team members – which is why I think this book will be well received.

My hope is that this book is a contribution to the team management and software architecture community and the ongoing dialog. Expanding our understanding of how members of a team are built up, fit in and are cared for can lead to better-informed decisions throughout the life-cycle of a project.

In the end I’m quite pleased with my chapter, which covers: trust in a professional setting, incongruent teams, agitators to building trust, induction problems, and socio-emotional needs, amongst others. There are many other great contributed chapters, including technological changes for distributed teams, executive overviews and more.

Special thanks to my wife and my encouraging colleague Krams Ramasubramanian.