Getting to CD In An Org Predisposed To Hate It

I thought this presentation by Dan McKinley was really interesting, and it resonated heavily with my experience helping to shepherd an organization that pendulum-swung from everybody hacking production, to nobody getting to do a release until they filled out a form in triplicate, to an org that was doing 8–10 releases on most days.

We never got to continuous delivery (CD), for a bunch of reasons, but mostly:

  • Cultural (it scares the crap out of the systems and support teams, even if it might be better for them)
  • Technical (it requires good tests and good dev/beta systems, and we’ve always underinvested in the resources there)
  • Organizational (we’ve rarely settled into a structure that allowed our teams to develop the discipline)

But we did continually get better, and I’m guessing that in another year or so, with the right people pushing, a real CI (continuous integration)/CD pipeline isn’t out of reach.

Some bits from the presentation that were particularly resonant with me …

Namely, we had a lot of process that was prophylactic. It was built with the intent of finding production problems before production.

As your organization gets bigger (and not even, like, really big, just bigger), there are lots of people who think their job is to protect production by creating all sorts of process to make it really hard to get something to production. In reality, all that process just makes people pay less attention, not more. There’s always somebody else who is more responsible for the code going live, being tested, being right. The further you are from being on the hook, the less attention you naturally pay.

Which is why smaller, more frequent releases, with less friction and less overhead, make a lot of sense. It’s your responsibility to make sure you don’t break production, and if you’re going to be responsible, don’t you want to make smaller bets? That leads to this tenet …

Deploying code in smaller and smaller pieces is another way. In the abstract, every single line of code you deploy has some probability of breaking the site. So if you deploy a lot of lines of code at once, you’re just going to break the site.

And you stand a better chance of inspecting code for correctness the less of it there is.
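
To put rough numbers on that reasoning, here’s a quick back-of-the-envelope calculation. The per-line failure probability is a made-up number, purely for illustration:

```python
# Purely illustrative: if each deployed line independently has a small chance
# of breaking the site, the odds of a bad deploy compound quickly with size.
p_line = 0.001  # assume a 0.1% chance that any given line breaks something

for lines in (10, 100, 1000, 5000):
    p_break = 1 - (1 - p_line) ** lines
    print(f"{lines:>5} lines -> {p_break:.1%} chance the deploy breaks something")

# Roughly: 10 lines -> 1.0%, 100 -> 9.5%, 1000 -> 63.2%, 5000 -> 99.3%
```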

There’s a lot of goodness in this presentation, resulting from the scars of helping to drag an engineering team into something that works, that has buy in, and increases the velocity and performance of the team (and helps keep everybody happy because they’re working on stuff that actually gets to production). There’s some bits towards the end of the presentation that make sense for one big team, but less sense for multiple teams. Multiple teams is a huge way to help solve this problem. If you can break up your application into smaller, separate applications, or services, or microservices, or trendy term du jour, then you can reduce your dependencies between teams.

That lets each team reduce its risk, so some teams can ship 50 times a day, some 10, and some 2. It requires a bit more coordination between teams, but with good documentation and smart API design (ideally with good versioning, so that team releases don’t have to be coupled), you can get to a point where every team can be really efficient and not beholden to the slowest team.
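
As a toy sketch of what that versioning can look like in practice (this is my own illustration, not anything from the presentation, and the framework and endpoint shapes are made up), serving v1 and v2 side by side lets a consuming team upgrade on its own schedule:

```python
# Hypothetical example: a Flask service exposing v1 and v2 of the same resource
# so downstream teams can migrate whenever they're ready, instead of coupling
# their releases to ours.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/orders/<int:order_id>")
def get_order_v1(order_id):
    # Original response shape: a flat total.
    return jsonify({"id": order_id, "status": "shipped", "total": 42.50})

@app.route("/api/v2/orders/<int:order_id>")
def get_order_v2(order_id):
    # New response shape: structured total. v1 keeps working, untouched.
    return jsonify({
        "id": order_id,
        "status": "shipped",
        "total": {"amount": 42.50, "currency": "USD"},
    })

if __name__ == "__main__":
    app.run(port=5000)
```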

Anyway, it’s a long presentation, but I think it’s a really great, real world example of how to get a big challenging org into CD (or at least on the path to it).

Making RSS More Readable

The JSON Feed format is a pragmatic syndication format, like RSS and Atom, but with one big difference: it’s JSON instead of XML.

For most developers, JSON is far easier to read and write than XML. Developers may groan at picking up an XML parser, but decoding JSON is often just a single line of code.

This is such a good, simple idea. In general, I hate dealing with XML (I actively bias against SOAP interfaces too). JSON is no more verbose than XML, is decidedly easier to read, and is far less fragile. I’ve added a JSON Feed to this very site.
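
Here’s a minimal sketch of what “a single line of code” looks like in practice. The feed URL is a placeholder, and the item fields are the ones the JSON Feed spec defines:

```python
# Fetch and decode a JSON Feed with nothing but the standard library.
import json
from urllib.request import urlopen

with urlopen("https://example.com/feed.json") as resp:  # placeholder URL
    feed = json.load(resp)  # that's the whole "parser"

print(feed["title"])
for item in feed.get("items", []):
    # JSON Feed items include url, title, and date_published, among other keys.
    print(item.get("date_published"), item.get("title"), item.get("url"))
```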

If You Aren’t Already Using a VPN, Time to Start

I mean, everybody wants to make sure their ISPs can sell their data, right?

I was particularly saddened to see Rep. Massie on the list of those voting for this measure. Having worked for him (years ago), I know he is certainly smart enough to understand the technical implications here, but he voted out of the belief that the free market was already doing a good enough job of this (i.e., Comcast won’t sell your data without your permission, for fear that you’ll leave for a competitor).

The problem is that, in great portions of this country, there’s no free market for ISPs. In most locations, it’s a local monopoly. I’m lucky: in my city, we have two cable providers, plus high-speed fiber (FiOS). In the town I grew up in? One cable provider. And then DSL, if you live in the right spot. The house I grew up in? No DSL. No options.

Anyway, use a VPN. Most sites are using HTTPS these days, which is helpful, but your ISP will still know what name you looked up, what IP came back, and how long you were on the site. If you want to be careful, switch to an open DNS provider, and use a VPN. Most DNS providers will also use your data, but they will at least give you the option to opt out. (As backwards as this sounds, I’d recommend Google Public DNS.)[1]

For a VPN, both Cloak and TunnelBear are reasonably cheap (probably less than you pay for a month of internet) and easy. Or, if you’re so inclined, roll your own.


  1. Google’s DNS privacy is pretty clear—“We don’t correlate or combine information from our temporary or permanent logs with any personal information that you have provided Google for other services.”  ↩

Was Sun One of the Powers That Created Captain Planet?

It took about four months of back and forth and permitting. Two and a half days of actual work on the roof. A couple of visits from a friendly inspector to make sure everything was kosher. And, finally, a 30-minute visit from a nice tech to set up the WiFi.

In the end, we’ve got an array of 26 solar panels producing energy on our roof (and setup in a location that you don’t really see from the street).

Unfortunately, we’ve only had a couple of sunny days since then, but on a cold, but sunny, day in March, they produced about 40 kWh of power, which I think is more than what we’d use on a normal day. It’ll be interesting to see how we do in April and May. I’m optimistic this will have really nice returns for us.

So far the only real issue has been the monitoring software, Enlighten from Enphase. When it’s working, it’s really nice. But, while my end seems to be reporting pretty regularly, the website goes long stretches between updates. And, over the weekend, it seemed flat-out down. I’m hoping I can figure out a way to pull info from it directly. It looks like there’s an API, so I might be able to wire up a Homebridge plugin to pull data from it and then show usage in my HomeKit apps.
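
I haven’t actually wired this up yet, so take this as a rough sketch: the endpoint, query parameters, and field names below are my best guesses at the Enlighten API’s shape and would need to be checked against Enphase’s docs, and the IDs and key are obviously placeholders.

```python
# Rough sketch of pulling a production summary from the Enlighten API.
import json
from urllib.request import urlopen

API_KEY = "your-api-key"      # placeholder
USER_ID = "your-user-id"      # placeholder
SYSTEM_ID = "your-system-id"  # placeholder

url = (
    "https://api.enphaseenergy.com/api/v2/systems/"
    f"{SYSTEM_ID}/summary?key={API_KEY}&user_id={USER_ID}"
)

with urlopen(url) as resp:
    summary = json.load(resp)

# Fields like these could then be surfaced through a Homebridge accessory.
print("Current power (W):", summary.get("current_power"))
print("Energy today (Wh):", summary.get("energy_today"))
```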

(And, no, Sun wasn’t one of Captain Planet’s people. Earth, Water, Wind, Fire, Heart. I guess maybe Fire counts?)

This is Why We Can’t Have Nice (Free) Things

There was a little internet kerfuffle last week when Matt Mullenweg from WordPress correctly pointed out that Wix was violating the GPL. Now, he did it in maybe not the nicest way (“If I were being honest, I’d say that Wix copied WordPress without attribution, credit, or following the license”), but at its core, his argument was true.

A core part of Wix’s mobile editor is forked from WordPress’ GPL licensed editor library.

And that’s pretty much all there is to it. In the end, if you use something that is GPL’d in your application, you walk a fine line of needing to open source and offer your source code under the GPL as well. The GPL is a viral license (GPLv3 particularly so), and including code licensed under it is, at best, something you should do with a close reading of the license. At worst, you simply just shouldn’t include any GPL code.

Wix’s CEO posted a response and completely missed the point. As did one of their engineers. They both seem to think that intent matters. While it does matter in that it helps us understand that there was probably not any malicious intent, the GPL is the GPL and it’s pretty clear.

As Daniel Jalkut says:

if you want to link your software with GPL code, you must also make your software’s source code available … You have to give your code away. That’s the price of GPL.

Many developers understand, and view the price of GPL as perfectly justified, while others (myself included) find it unacceptable. So what am I supposed to do? Not use any GPL source code at all in any of my proprietary products? Exactly. Because the price of GPL is too much for me, and I don’t steal source code.

In my office, we’ve basically made the same rule. Even though we don’t ship code, we still stay away from GPL’d code as much as possible, simply to avoid any chance of impropriety.

I look at the GPL like the Dave Matthews Band: it sucks, there are lots of other licenses just like it that are much, much better, and its fans are so annoying as to make it that much worse.

The Olympics and Blaming Millennials

(Note: I’m not sure why these two articles bugged me so much. But they did.)

There was a somewhat poorly written (or, at least, poorly titled) article on Bloomberg (shocker) about the Olympics ratings being down for NBC. In typical Bloomberg fashion, it has a clickbait title (who can resist blaming millennials?), even though the article itself points out that it’s the 18–49 demographic that saw ratings drop (with no breakdown inside that demo to determine where the real decline was).

And in the 18-to-49-year-old age group coveted by advertisers, it’s been even worse. That audience has been 25 percent smaller, according to Bloomberg Intelligence.

In response, a millennial (presumably) attempted to defend his peers and lash out at NBC (though, really, it was more about the cable industry) over the inability of the cable/broadcast industry to meet the needs of cord cutters.

The issue I have with the article isn’t so much the argument. I agree that the cable and TV industries are going to have to change the way they think about the broadcast model. And, while it may not be changing as fast as we’d like, it is changing incredibly fast!

Think about that: ten years ago, being a cord cutter meant using an antenna, borrowing DVDs from the library, and maybe downloading a show from iTunes.

Today, you could conceivably use Hulu, Netflix, HBO, Sling TV, and iTunes, and probably cover everything except live sports. And ESPN may be going over the top in the next year. That’s progress.

The article, however, takes almost 1,800 words to complain about how difficult it was to watch the Olympics online without a cable subscription, and then complains about too many advertisements and the lowest-common-denominator announcing during the opening ceremonies, as if these are new things. And, while cord cutters are a growing audience, something like 80% of households still have cable. Cord cutters alone didn’t cause the audience to drop.

No, it’s not until the last segment of the article, which mostly hits on what I believe to be part of the real reason for the ratings being down:

It opened with Simone Biles and Co., but then, despite being filmed earlier in the day, inexplicably goes from the earlier rounds of Gymnastics to Swimming. Hours pass before we finally get to see the resolution to those Gymnastics rounds

The ratings were down because NBC couldn’t figure out how to show events in real time to both the East and West Coasts. With clips showing up online, on ESPN, and on Twitter, the audience, millennials or not, couldn’t be bothered to stay up until 11:30pm ET to watch gymnastics finals that had already happened that day. Or worse, for the half of the country on the West Coast, that had happened many hours before.

NBC’s real crime is not figuring out how to get more of the core live events in front of the audience when it was live. Live sports are the only thing left that really can keep audiences from cutting the cord, and NBC (while well intentioned with their wall-to-wall online coverage) forgot that.

In the end, Bloomberg incorrectly blamed millennials, and, in turn, millennials (or at least this millennial) responded in the stereotypically myopic millennial way.

ScanSnap Directly to the Cloud

Last week, Fujitsu added an awesome feature to their ScanSnap scanner line (at least to the iX500 that I have). You can set it up so that, rather than needing a machine on the same wireless network to pick up the scanned documents, the scans just get shipped to your Dropbox or Google Cloud.

That let’s you do some really interesting things. You can run Hazel rules on your Dropbox folder, just like you can on a local folder, to do automatic sorting, naming, etc. on your machine. You can also do some interesting automation things with IFTTT to trigger other types of activity based off of files getting scanned. Or some combination of both (you scan some sort of receipt, it’s automatically filed into a folder via Hazel, which also triggers an IFTTT action to send an email to someone telling them that receipt is there).

The cloud feature seems small, but it’s a huge improvement to the convenience of what is already a device that has made my life a lot simpler.