Top 5 signs that your Agile/Scrum project has a problem…

A lot of teams follow Agile development with Scrum or some variant thereof. Here’s a handy list to check whether your project has a problem:

5. Your burn-down chart looks like a dead person’s ECG.

4. Only chickens come to your daily standup meetings.

3. You have two sprints a year.

2. The product owner and scrum master have a cage fight during the review meetings, and everyone cheers.

And the number 1 sign that your Agile project has a problem:

1. Your team has a good laugh when the ‘potentially shippable’ clause is mentioned.


Software Management Worst Practices

Managing software is never easy. There are a number of issues that, if not dealt with properly, will quickly lead you down the path to bad software. Some issues are universal to project management, while some may be specific to your team. Identifying problem areas is the first step towards solving them. The following list captures some of the common problems that plague software management.

7. When ‘ad-hoc’ code becomes production code (or “Let’s make it work for now”): We have all written ad-hoc code, code that will “make it work for now”. This usually happens during crunch time with a deadline looming, but sometimes we write ad-hoc code during a regular development cycle. You know what I’m talking about: “let’s just hardcode this string for now”, or “let’s just use this algorithm for now”, things like that. Of course, the solution is refactoring, but (already overworked) developers are reluctant to revisit and change old code (who wants to change working code?). Also, major refactoring may require major retesting. The tragedy is that this type of code ends up in production and causes headaches later in the form of sluggish performance, hard-to-track bugs, and maintainability problems.
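As an illustration (hypothetical names, sketched in Python for brevity), here is the kind of “for now” hardcoding that tends to stick around, next to a refactored version that makes the value configurable:

```python
import os

# Ad-hoc version: the path is hardcoded "just for now" -- and this is
# exactly the kind of line that quietly ships to production.
def load_report_adhoc():
    path = "C:/temp/report.txt"  # works on my machine...
    return f"loading {path}"

# Refactored version: the same behavior, but the value comes from
# configuration with an explicit default, so a deployment can change it
# without editing (and retesting) the code.
def load_report(path=None):
    path = path or os.environ.get("REPORT_PATH", "C:/temp/report.txt")
    return f"loading {path}"
```

The refactoring here is small and mechanical, which is exactly why it is so often deferred and then forgotten.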

6. When legacy software limits developers (or “This is how we do it”): As developers we often need to leverage legacy software and frameworks. This isn’t bad in itself, but it impacts architectural decisions, decisions about new functionality, and the choice of tools and technologies. Common scenarios include having to use a pre-existing logging framework, a custom data layer, or an old database. This can define (and sometimes limit) the design and structure of the code we write. We are no longer able to implement what we want the way we want, since we need to adhere to a legacy framework or a legacy policy.

5. No or insufficient code documentation (or “It’s low priority”): How many of us can honestly say that we are working on well-documented software? Not many, I’m sure. Is your functional spec up to date? Is your detailed design up to date? Is the code well documented? If a new developer joins the team, how easy is it for them to understand and contribute to the code? Most project managers correctly assign documentation a lower priority than actual coding, but most developers incorrectly interpret this as optional work. Keeping the code documented doesn’t really take much of a developer’s time; developers simply aren’t aware of its importance.
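To make the point concrete, here is a hypothetical sketch (Python for brevity) of how little it costs to document a function as it is written; these few lines of docstring are what spare the next developer an archaeology session:

```python
def accrued_interest(principal, annual_rate, days):
    """Return the simple interest accrued on `principal` over `days`.

    Args:
        principal: the amount invested, in the account currency.
        annual_rate: the yearly rate as a fraction (0.05 means 5%).
        days: the number of days elapsed; a 365-day year is assumed.
    """
    return principal * annual_rate * days / 365
```

Note that the docstring also records a business assumption (the 365-day year) that would otherwise live only in someone’s head.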

4. Over-dependence on a few people (or “That’s not my problem”): In most projects only a handful of people know the overall design and architecture. They are thus able to provide guidance and predict the impact of changes on different parts of the system. But what happens when decisions need to be made in their absence? Most other developers are only concerned with their own module or sub-part. In an ideal world, all developers would have a good understanding of the project. But since we live in a less-than-perfect world, we can start by rotating tasks among the developers. Every developer should get a chance to work on different tasks, like keeping the documents updated, making sure the application installs correctly, and making sure testers get the right bits. Making more developers familiar with more parts of the system will definitely help overall project quality.

3. Working with incorrect metrics: I’ve worked for companies that use LOC (lines of code) and bugs per LOC as the sole metrics for measuring developer productivity and overall project health. While these, admittedly, do provide some indication, they are hardly enough. Incorrect (or insufficient) metrics generate faulty data, which can ultimately affect team morale, management’s trust in the team, the business’s confidence in the product, and, worst of all, the individual developer’s self-confidence. Other methodologies for project management exist (XP, Agile, and others), and I’ve personally tried some of them with varying degrees of success. The best advice is for teams to work out a management method that everybody is comfortable with and then stick to it.

2. Underestimating the importance of testers (or “Our developers do all the testing”): Think that because the code passes the unit tests the developers wrote, it is bug-free? Think again! Developers write unit tests so that they pass; ask any developer and s/he’ll tell you. Testers, on the other hand, write tests so that they fail. Developers do provide a first line of defense against bugs. They are the infantry; they make sure that the code provides the functionality it claims and is robust in common failure scenarios. But this needs to be backed up with heavy artillery: the testers. Sometimes another way of thinking, another pair of eyes, or another person using the code is all it takes to find bugs. Does this mean that finding a bug is solely a tester’s responsibility? Of course not! Developers and testers complement each other and need to work together to find bugs.
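A tiny, hypothetical sketch (Python for brevity) of the difference in mindset: the developer’s test exercises the happy path and passes, while the tester’s instinct is to probe the edges, which is where the bug actually lives:

```python
def word_count(text):
    # Naive implementation: split the text on single spaces.
    return len(text.split(" "))

# The developer's test: written to pass, and it does.
assert word_count("hello world") == 2

# The tester's test: written to fail -- and it finds a bug.
# An empty string should contain zero words, but "".split(" ")
# returns [""], so word_count("") comes back as 1, not 0.
assert word_count("") == 1  # this assertion documents the bug, not the intent
```

Neither test is wrong to write; the point is that the code needed both perspectives before anyone could call it done.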

And the number one worst practice in software management:

1. Not budgeting enough time for development (or “I need the compiler by Friday”): This is the single most common complaint among developers. Tight schedules mean more emphasis on coding than on design, and less time for testing. Long hours make developers grumpy, and they may make more mistakes. Even though “enough time” can never be satisfactorily defined, as a rule of thumb, the development time required is usually 2 to 3 times what management decides it should be. This is because development time needs to accommodate changing business rules, feature addition/change requests from management and customers, integration issues that may crop up, and many other unforeseen factors that managers don’t plan for.

Tracking Software Complexity Part II

The last team I worked with had a grand total of four members. That is, four people, in the entire company, were working on the project. We were the business, the development team, and the testing team all rolled into one. The actual code was quite complex; we used all sorts of C# 2.0 constructs like generics, generic delegates, nullable types, and anonymous methods. The business logic consisted mainly of currencies, securities, portfolios, and equities. A lot of statistical calculations were involved, which required lots of P/Invoke calls to the NAG C mathematical library. We met once a week, and our build and deployment consisted of making sure our code executed on our machines and then uploading the bits to Source Depot.

Compared to that, my current project involves about 100 people (a whole floor, if you can believe it!). This includes the business, the managers, the testers, the support staff, and the developers. We work on a part of an extended system. The actual code I write here is quite straightforward. But consider some of the problems we have had to face while deploying our solution. We have a bunch of (10+) Windows services (most of them interfacing with external systems), some web services, and, of course, a SQL Server backend. We have to deploy to three separate environments. Each environment consists of three machines (two machines running our web and Windows services and one SQL machine), spread over at least two domains. The services run under different accounts and have different accessibility permissions. Add to the mix clustered servers, virtual servers, certificates for authorization, and services sending out emails, and you can imagine what a great time we had deploying our solution.

Here is a partial list of issues we ran into (hopefully, we have learnt from our mistakes):

  • a critical network password expired and had to be reset while we were deploying the bits
  • an external system had rebuilt their database and so one of our services was failing
  • configuration files had to be changed to match different drives for installation
  • Microsoft Enterprise Libraries weren’t installed on one of the machines
  • we ran out of disk space on a machine while sanity-checking our deployment
  • and (my personal favorite), we installed the wrong bits

The point I’m trying to make here is that software complexity isn’t just about code complexity (LOC, cyclomatic complexity, or any other metric), or algorithm optimization, or database design, or patterns used. Software complexity is also about the things that don’t go into the code: the pressure from the business, the meetings, the deadlines, the team, the build, the deployment, and other such factors. These not only affect the quality of the code (directly or indirectly), but also mean the difference between great software and barely usable software.

Tracking Software Complexity

Building great software has as much to do with the actual design and coding as it does with project management. As they say, behind every successful software project is a team that has diligently followed some form of software methodology. As team and code size increase, it becomes more and more imperative that some methodology be rigorously followed.

So what do I mean by ‘some form of software methodology’? I mean a software development method that works for the team. Specifically, I’m talking about unit test policies, build integration policies, code ownership policies, and so on.

Now, the method doesn’t have to be TSP/PSP, Agile, or Extreme Programming (or some other known method); it could be something the team came up with, feels comfortable with, and, most importantly, follows diligently.