Ah-gee-lay … it must be Italian!

Like all things that become the standard, agile[1]/scrum is seeing a bit of a backlash. I just happened across a couple of posts that lay out some interesting arguments against the new standard.

First, a long article from OK I Give Up:

This is actually my biggest gripe about Scrum. As mentioned above, in Scrum, the gods of story points per sprint reign supreme. For anything that doesn’t bring in points, you need to get the permission of the product owner or scrum master or someone who has a say over them. Refactoring, reading code, researching a topic in detail are all seen as “not working on actual story points, which is what you are paid to do”. Specialization is frowned upon. Whatever technology you develop or introduce, you are not allowed to become an expert at it, because it is finishing a story that brings the points, not getting the gist of a technology or mastering an idea. These are all manifestations of the control mania of Scrum.

I do think there is something nefarious about the godliness of points in the Scrum process (and the immediate, inarguable counter-argument that if it isn’t working for you, you’re doing it wrong).

Put in a slightly more graphical way:

[Image: “Tasks” comic]

(Hilarious image from RobBomb)


  1. “Ah-gee-lay” – if you don’t get the reference, check out this video:  ↩

Unintentionally Eating Some Delicious Cookies

I use a cool little web app called ThinkUp to keep track of stuff I post to Twitter and Facebook. I use the self-hosted version, running on my own server, and I’ve had it running for a year or so and never had a problem.

This weekend, I went to log in to see if ThinkUp would show me anything interesting. Except I couldn’t log in. Every time I tried, it would just kick me back to the login screen. Clearly something had gone wrong. I watched the login requests via Developer Tools in Safari and Chrome and noticed that I was not getting a PHP session cookie. That’s certainly odd—setting a session cookie is pretty straightforward and I’ve never seen it fail.

As is typical with this sort of issue, I debugged it ass backwards. I spent an hour or so writing test scripts, changing permissions on session directories, and changing session settings before realizing I was debugging things entirely wrong.

My stack looks something like this:

nginx -> varnish -> apache2

I realized that I should start by looking to see if the request to Apache2 was getting the cookie headers back. I ran a quick curl command, and sure enough, the cookie headers were there when talking directly to Apache2. Logically, I then ran the same curl command, changing it to talk to Varnish. Sure enough, the cookie headers were gone.
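If you haven’t done this sort of check before, it’s just a matter of dumping the response headers at each layer and comparing them. Something like this, where the ports and the login path are assumptions that will vary by setup:

# Hit Apache directly (assuming it listens on port 8080) and dump the response headers
curl -s -D - -o /dev/null http://127.0.0.1:8080/session/login.php

# The same request, this time through Varnish (assuming it listens on port 6081)
curl -s -D - -o /dev/null http://127.0.0.1:6081/session/login.php

# Look for a "Set-Cookie: PHPSESSID=..." line in the first dump that is missing from the second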

Finally, I’d figured out where my cookies were getting eaten (haw haw).

Diving into the Varnish config, it was pretty quickly obvious what had happened. When adding Varnish caching to support this here blog, I added this line,

unset beresp.http.set-cookie;

which basically says “get rid of the cookie header we’re sending back to the user”, which allows us to cache more stuff. Of course, that rule was being applied far too liberally, dropping the PHP session cookie and making it so I couldn’t log in. A couple of tweaks and a restart later, and all was well.
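
For the curious, the tweak was roughly to scope that unset so it only fires for content I actually want cached. This is a sketch in Varnish 3-style VCL, not my exact config, and the URL patterns are assumptions:

sub vcl_fetch {
    # Only strip cookies for the blog and static assets we want cached.
    # Everything else (like ThinkUp and its PHP session cookie) passes through untouched.
    if (req.url ~ "^/blog/" || req.url ~ "\.(css|js|png|jpg|gif)$") {
        unset beresp.http.set-cookie;
    }
}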

This sort of thing happens to me somewhat frequently. I muck around with some settings on my server, and everything works great for my blog or my static site, but I forget about other things I have running. A few weeks later, I notice they’re broken, but now I have no idea why. It’s a pretty good case for using a tool like doing to log the things I do (that aren’t necessarily driven by my OmniFocus to-do list), so that I don’t spend hours debugging my self-inflicted problems.

Always Use Braces (and BSD KNF Style)

There’s been plenty of coverage of the serious SSL issue that was identified in Apple’s iOS/MacOS stack.

From ImperialViolet, here’s the bug:

if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    goto fail;
    goto fail;
if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    goto fail;

That second goto fail; is the problem. Since those if clauses aren’t wrapped in braces, once you get past the first two if statements, you’ll always hit the second goto fail;, skipping the checks that follow—which is the bug.
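
For illustration, here’s the same fragment with braces. This is just a sketch, not Apple’s actual fix, but it shows how a duplicated line inside a braced block becomes dead code rather than an unconditional jump:

if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0) {
    goto fail;
}
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0) {
    goto fail;
    goto fail;   /* the stray duplicate is now unreachable, not a live statement */
}
if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0) {
    goto fail;
}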

On our development team, and in my personal programming, I’m a pretty big believer in wrapping your control statements in braces, and in having the braces on the same line as the control statement. I’m a fan of something close to the BSD KNF Style, using the cuddled else, which looks something like this:

if (foo) {
    stuff;
} else {
    other stuff;
}

The reason I’m a fan of this is two-fold. First, you have braces around your control statements, so you’ll avoid that Apple bug above, and it’s clear what block your code belongs to. Second, and this is why I’m keen on this particular style: you won’t accidentally break up joined control statements.

For example, if you write your code without the cuddled else …

if (foo) {
    stuff;
}
else {
    other stuff;
}

that’s generally no big deal. Until someone isn’t paying attention and breaks up your if/else block, or drops a line of code in there. Now, at best, you’ve got a compilation error. In the worst case, you end up with unexpected code executing (because maybe someone adds a different if statement in between, and the else quietly binds to that instead).
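
A contrived sketch of that worst case, using the same placeholder names as above:

if (foo) {
    stuff;
}
/* someone later drops a "quick fix" in here ... */
if (bar)
    do_something_else();
else {
    other stuff;   /* now runs when bar is false, not when foo is false */
}

With the cuddled } else {, there’s no tempting gap between the closing brace and the else to drop code into, so this mistake is much harder to make in the first place.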

Coding styles and guidelines aren’t a replacement for good code review or QA, but they can help to identify or flag issues like this before they slip through.

When Caching Bites Back

We have an application on our site that was rewritten a few years back by a developer who is no longer with the company. He attempted to do some “smart” caching things to make it fast, but I think he had a fundamental misunderstanding of how caching, or at least memcached, works.

Memcached is a really nifty, stable, memory-based key-value store. The most common use for it is caching the results of expensive operations. Let’s say you have some data you pull from a database that doesn’t change frequently. You’d cache it in memcached for some period of time so that you don’t have to hit the database on every request.

A couple of things to note about memcached. Most folks run it on a number of boxes on the network, so you still have to go across the network to get the data. [1] Memcached also, by default, has a 1MB limit on the objects/data you store in it. [2] Store lots of stuff in it, keep it in smaller objects (that you don’t mind throwing across the network), and you’ll see a pretty nice performance boost.
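
To make the intended pattern concrete, here’s a rough cache-aside sketch using libmemcached. The key naming and the lookup_user_from_db() helper are made up for illustration:

#include <libmemcached/memcached.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the expensive database lookup. */
static char *lookup_user_from_db(const char *user_id)
{
    char buf[128];
    snprintf(buf, sizeof(buf), "{\"id\":\"%s\"}", user_id);
    return strdup(buf);
}

char *get_user(memcached_st *memc, const char *user_id)
{
    char key[64];
    size_t value_len;
    uint32_t flags;
    memcached_return_t rc;

    /* One small key per item, not one giant blob for the whole data set. */
    snprintf(key, sizeof(key), "user:%s", user_id);

    char *cached = memcached_get(memc, key, strlen(key), &value_len, &flags, &rc);
    if (rc == MEMCACHED_SUCCESS) {
        return cached;   /* cache hit: no database trip */
    }

    /* Cache miss: do the expensive work, then cache it for next time. */
    char *fresh = lookup_user_from_db(user_id);
    memcached_set(memc, key, strlen(key), fresh, strlen(fresh), 600, 0);
    return fresh;
}

Each value stays comfortably under the 1MB ceiling, and a cache miss only costs you one item’s worth of work.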

Unless … someone decides to not cache little things. And instead caches a big thing.

We started to notice some degradation in performance over the past few months. It finally got bad enough that I had to take a look. It only took a little bit of debugging to determine that the way the caching was implemented wasn’t helping us: it was actively hurting us. Rather than caching entries individually, it was loading up an entire set of the data and trying to cache it as one massive chunk. Which, since it was larger than the 1MB limit, would fail.

You’d end up with something like this:

  • Hey, do I have this item in the cache?
  • Nope, let’s generate the giant object so we can cache it
  • Send it to the server to cache it
  • Nope, it’s too big, can’t cache it
  • Oh well, onto the next item … do I have it in the cache?

Turns out, this wasn’t just impacting performance. It was hammering our network.

[Screenshot: network traffic graph, December 12, 2013]

The top of that graph is about 400Mb/s. The drop-off is when we rolled out the change to fix the caching (to cache individual elements rather than the entire object). It was, nearly instantaneously, a 250Mb/s drop in network traffic.

The lesson here? Know how to use your cache layer.


  1. You can run it locally. It’s super fast if you do. But, if you run it locally, you can’t share the data across servers. It all depends on your use case.

     ↩

  2. That 1MB limit is changeable

     ↩

Challenging

A couple of months ago, I changed roles (part time) in the company. I moved to a team built on the premise that a small team of developers, unencumbered by the process and inertia of a large company, could develop and iterate on products and find things that were good fits for our customers. In a big company (which my little company now is), it was a great, maybe even ideal, opportunity. A small, (little a) agile team cranking out product.

As of two weeks ago, I’m back in the main office, managing the entire team of developers, while in the midst of the business adopting the (big a) Agile methodology. It’s probably been two of the most challenging weeks of my career.

Our development process was never “waterfall”, but it wasn’t “agile” either. It was some in-between mish-mash of working on what was important and reprioritizing weekly, without the understanding that not everything deserved to be worked on. Not everything was important to the business. Loads of horse trading so that everyone got their slice of resources, without regard to how much value those resources might generate for the company.

In that regard, agile (or maybe that’s Agile) has definitely helped. We’re slowly helping people realize that we don’t have infinite resources, and therefore, we can only work on the really important stuff (and maybe some slightly less than really important stuff). But no “because I want it” stuff.

It’s not been easy. It’s been a slog. Other parts of the business (not the development team) are still getting on board, which is a challenge. There’s the culture clash: do you bridge the gap between the old process and the new, or push everyone into the deep end? The former means slower adoption; the latter can lead to some interesting conflict.

All this is going on while still needing to ship code and re-learn how to manage a team of people. With some people pushing to move faster than the organization (and team and individuals and processes and technology) might be ready for.

It’s going to be an interesting challenge.

Reading List to Instapaper Update

I got a little bit giddy when the first couple of people followed my GitHub repository for my “Reading List to Instapaper” tool (and commented on a couple of stupid bugs).

This whole social coding thing sort of works.

I made some small changes to clean up the code a bit (and comment it a bit better). Nothing dramatic—it still works the same way, but I got rid of a dependency on Rails, which was needlessly in there.

The next step is to try to make a handy auto-installer, so you don’t have to play with launchctl yourself. But, in the meantime, check it out if you use Instapaper and you’re on a Mac.
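
If you’d rather not wait for the installer, the manual setup boils down to a LaunchAgent and a launchctl load. Here’s a sketch of the general shape; the label, paths, and interval are all made up, so treat the repo’s instructions as the real reference:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.readinglist-to-instapaper</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/ruby</string>
        <string>/Users/you/bin/reading_list_to_instapaper.rb</string>
    </array>
    <!-- Run every five minutes -->
    <key>StartInterval</key>
    <integer>300</integer>
</dict>
</plist>

Save that to ~/Library/LaunchAgents/com.example.readinglist-to-instapaper.plist and then:

launchctl load ~/Library/LaunchAgents/com.example.readinglist-to-instapaper.plist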