Having fun with Git

I recently read The Git Book. As I went through the Git Internals parts, it struck me how simple and elegant the structure of Git really is. I decided that I just had to create my own little library to work with Git repositories (as you do). I call the result Silly Jgit. In this article, I will be walking through the code.

This article is for you if you want to understand Git a bit deeper or perhaps even want to work directly with a Git repository in your favorite programming language. I will be walking through four topics: 1) reading a raw commit from a repository, 2) reading the tree hash of the root of a commit, 3) parsing the file list of a directory tree, and 4) reading the file contents from a subdirectory of a commit root.

Reading the head commit from a repository

The first thing we need to do in order to read the head commit is to find out which commit is the head of the repository. The .git/HEAD file is a plain text file that contains the name of a file in the .git/refs/heads directory. If you’ve checked out master, this will be .git/refs/heads/master. This file is another plain text file, containing a hash: a 40-digit hexadecimal number. The hash maps to the filename of a Git object under .git/objects: the first two digits name a subdirectory and the remaining 38 name the file itself. This file is a zlib-compressed file containing the commit information. Here’s the code to read it:
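The original code isn’t reproduced here, but a sketch of these steps (the class and method names are my own, not Silly Jgit’s) could look like this:

```java
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.InflaterInputStream;

public class GitHelper {

    // .git/HEAD contains e.g. "ref: refs/heads/master"
    // (this sketch assumes HEAD is not detached)
    public static String readHeadHash(String gitDir) throws IOException {
        String headRef = new String(Files.readAllBytes(Paths.get(gitDir, "HEAD"))).trim();
        String refFile = headRef.substring("ref: ".length());
        return new String(Files.readAllBytes(Paths.get(gitDir, refFile))).trim();
    }

    // A 40-digit hash maps to objects/<first two digits>/<remaining 38 digits>
    public static String objectPath(String hash) {
        return "objects/" + hash.substring(0, 2) + "/" + hash.substring(2);
    }

    // Git objects are zlib-compressed, so InflaterInputStream
    // (not GZIP or Zip) is what unpacks them
    public static byte[] readObject(String gitDir, String hash) throws IOException {
        try (InputStream in = new InflaterInputStream(
                new FileInputStream(Paths.get(gitDir, objectPath(hash)).toFile()))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) > 0) out.write(buffer, 0, n);
            return out.toByteArray();
        }
    }
}
```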

Running this code produces the following output (notice that some of the spaces in the output are actually null bytes in the file):

Finding the directory tree of a commit

When we have the commit information, we can parse it to find the tree hash. The tree hash references another file under .git/objects which contains the index of the root directory of the files in the commit. In the example above, the tree hash is “c03265971361724e18e31cc83e5c60cd0e0f5754”. But before we read the tree hash, we have to read the object type (in this case a “commit”) and size (in this case 237).
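A parser for the header and the tree line might look something like this (again a sketch, not the actual Silly Jgit code):

```java
public class CommitParser {

    // A raw commit object looks like:
    // "commit <size>\0tree <hash>\nparent <hash>\nauthor ...\n..."
    public static String objectType(byte[] raw) {
        String text = new String(raw);
        return text.substring(0, text.indexOf(' '));
    }

    public static int objectSize(byte[] raw) {
        String text = new String(raw);
        return Integer.parseInt(text.substring(text.indexOf(' ') + 1, text.indexOf('\0')));
    }

    public static String treeHash(byte[] raw) {
        String text = new String(raw);
        String body = text.substring(text.indexOf('\0') + 1);
        // the first line of the body is "tree <40-digit hash>"
        return body.substring("tree ".length(), body.indexOf('\n'));
    }
}
```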

Looking at the tree hash file is not as straightforward, however:

The next part of this article will show how to deal with this.

Parsing a directory tree

The tree file looks like it contains a lot of garbage. But don’t panic. Just like the commit object, the tree object starts with the type (“tree”) and the size (130). After this, it lists each file or directory. Each tree entry consists of its permissions (which also tell us whether this is a file or a directory), the file name, and the hash of the entry, but this time as a binary number. We can read through the entries and find the file we want. We can then just print out the contents of this file:
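A sketch of such a tree parser (here returning a map from file name to hex hash, and dropping the octalMode for brevity):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TreeParser {

    // After the "tree <size>\0" header, each entry is
    // "<octal mode> <file name>\0" followed by a 20-byte binary hash
    public static Map<String, String> parseTree(byte[] raw) {
        Map<String, String> entries = new LinkedHashMap<>();
        int pos = indexOf(raw, (byte) 0, 0) + 1; // skip the header
        while (pos < raw.length) {
            int nameStart = indexOf(raw, (byte) ' ', pos) + 1;
            int nameEnd = indexOf(raw, (byte) 0, nameStart);
            String name = new String(raw, nameStart, nameEnd - nameStart);
            StringBuilder hash = new StringBuilder();
            for (int i = nameEnd + 1; i <= nameEnd + 20; i++) {
                hash.append(String.format("%02x", raw[i] & 0xff));
            }
            entries.put(name, hash.toString());
            pos = nameEnd + 21;
        }
        return entries;
    }

    private static int indexOf(byte[] raw, byte target, int from) {
        for (int i = from; i < raw.length; i++) {
            if (raw[i] == target) return i;
        }
        return -1;
    }
}
```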

Here’s an example of a parsed directory listing. I have not shown the octalMode for each file, but it can be extremely useful for distinguishing directories (whose octalMode starts with 0) from files:

Reading a file

This leads us to the end of our journey – how to read the contents of a file. Once we have the entries of a tree, it’s a simple matter of looking up the hash for a filename and parsing that file. As before, the file contents will start with the type (“blob” – which means “data”, I guess) and file size:
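A sketch of the blob parsing (the header handling is the same as for commits and trees):

```java
public class BlobParser {

    // A blob object is simply "blob <size>\0<file contents>"
    public static String blobContents(byte[] raw) {
        String text = new String(raw);
        return text.substring(text.indexOf('\0') + 1);
    }
}
```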

This prints the contents of our file. Obviously, if you want to find a file in a subdirectory, you’ll have to do a bit more work: parse another tree object, look up an entry in that object, and so on.

Conclusions

This blog post shows how in less than 50 lines of code, with no dependencies (but a small utility helper class), we can find the head commit of a git repository, parse the file listing of the root of the file tree for that commit and print out the contents of a file. The most difficult part was to discover that it was the InflaterInputStream and not Zip or Gzip that was needed to unpack a git object.

My silly-jgit project supports reading and writing commits, trees and hashes from .git/objects. This is just the core subset of the Git plumbing commands. Furthermore, just as I wrote the article, I noticed that git often packs objects into .git/objects/pack. This adds a totally new dimension that I haven’t dealt with before.

I hope that nobody is crazy enough to actually use my silly Git library for Java. But I do hope that this article gave you some feeling of Git mastery.

Posted in Code, English, Java, Technology | 2 Comments

Offensive programming

How to make your code more concise and well-behaved at the same time

Have you ever had an application that just behaved plain weird? You know, you click a button and nothing happens. Or the screen all of a sudden turns blank. Or the application gets into a “strange state” and you have to restart it for things to start working again.

If you’ve experienced this, you have probably been the victim of a particular form of defensive programming which I would like to call “paranoid programming”. A defensive person is guarded and reasoned. A paranoid person is afraid and acts in strange ways. In this article, I will offer an alternative approach: “Offensive” programming.

The cautious reader

What may such paranoid programming look like? Here’s a typical example in Java:
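The original example isn’t reproduced verbatim; a paranoid readUrl (the class and method names are mine) typically looks something like this:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;

public class ParanoidUrlReader {

    // Paranoid version: every exception is caught, logged and ignored,
    // and the method limps on with whatever state it has left
    public static String readUrl(String urlString) {
        StringBuilder builder = new StringBuilder();
        URL url = null;
        try {
            url = new URL(urlString);
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }
        HttpURLConnection connection = null;
        try {
            connection = (HttpURLConnection) url.openConnection();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    builder.append(line);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return builder.toString();
    }
}
```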

This code simply reads the contents of a URL as a string. A surprising amount of code to do a very simple task, but such is Java.

What’s wrong with this code? The code seems to handle all the possible errors that may occur, but it does so in a horrible way: It simply ignores them and continues. This practice is implicitly encouraged by Java’s checked exceptions (a profoundly bad invention), but other languages see similar behavior.

What happens if something goes wrong:

  • If the URL that’s passed in is an invalid URL (e.g. “http//..” instead of “http://…”), the following line runs into a NullPointerException: connection = (HttpURLConnection) url.openConnection();. At this point in time, the poor developer who gets the error report has lost all the context of the original error and we don’t even know which URL caused the problem.
  • If the web site in question doesn’t exist, the situation is much, much worse: The method will return an empty string. Why? The result of StringBuilder builder = new StringBuilder(); will still be returned from the method.

Some developers argue that code like this is good, because our application won’t crash. I would argue that there are worse things that could happen than our application crashing. In this case, the error will simply cause wrong behavior without any explanation. The screen may be blank, for example, but the application reports no error.

Let’s look at the code rewritten in an offensive way:
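A sketch of the offensive version:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

public class OffensiveUrlReader {

    // Offensive version: no local exception handling; the caller
    // decides what a failure should mean
    public static String readUrl(String urlString) throws IOException {
        StringBuilder builder = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new URL(urlString).openConnection().getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                builder.append(line);
            }
        }
        return builder.toString();
    }
}
```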

The throws IOException declaration (necessary in Java, but in no other language I know of) indicates that this method can fail and that the calling method must be prepared to handle this.

This code is more concise and if there is an error, the user and log will (presumably) get a proper error message.

Lesson #1: Don’t handle exceptions locally.

The protective thread

So how should this sort of error be handled? In order to do good error handling, we have to consider the whole architecture of our application. Let’s say we have an application that periodically updates the UI with the content of some URL.
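The robust timer code isn’t shown verbatim here; the idea is to catch exceptions inside the task itself, so that one failed update doesn’t kill the schedule (a sketch, with my own helper names and a generic Runnable in place of the URL-reading UI update):

```java
import java.util.Timer;
import java.util.TimerTask;

public class UiUpdater {

    // The run() method swallows exceptions so that one failed update
    // doesn't terminate the timer thread; the next run gets a new chance
    public static Timer startPeriodicUpdate(Runnable update, long periodMillis) {
        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                try {
                    update.run();
                } catch (Exception e) {
                    e.printStackTrace(); // log and carry on
                }
            }
        }, 0, periodMillis);
        return timer;
    }
}
```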

This is the kind of thinking that we want! Most unexpected errors are unrecoverable, but we don’t want our timer to stop because of one, do we?

What would happen if we did?

First, a common practice is to wrap Java’s (broken) checked exceptions in RuntimeExceptions:
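For example (readUrl here is the offensive version from earlier, repeated so the sketch is self-contained):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

public class UrlReader {

    public static String readUrl(String urlString) throws IOException {
        StringBuilder builder = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new URL(urlString).openConnection().getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                builder.append(line);
            }
        }
        return builder.toString();
    }

    // Wrap the checked IOException in a RuntimeException so that
    // callers aren't forced to declare or catch it
    public static String readUrlUnchecked(String urlString) {
        try {
            return readUrl(urlString);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```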

As a matter of fact, whole libraries have been written with little more value than hiding this ugly feature of the Java language.

Now, we could simplify our timer:
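Sketched generically (in the article, the task body would call the unchecked readUrl and update the UI), the simplified timer has no try/catch in run():

```java
import java.util.Timer;
import java.util.TimerTask;

public class NaiveUpdater {

    // No try/catch in run(): the first RuntimeException terminates
    // the timer thread, and no further updates ever happen
    public static Timer startPeriodicUpdate(Runnable update, long periodMillis) {
        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                update.run();
            }
        }, 0, periodMillis);
        return timer;
    }
}
```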

If we run this code with an erroneous URL (or the server is down), things go quite bad: We get an error message to standard error and our timer dies.

At this point, one thing should be apparent: This code retries whether there’s a bug that causes a NullPointerException or whether a server happens to be down right now.

While the second situation is good, the first one may not be: A bug that causes our code to fail every time will now be puking out error messages in our log. Perhaps we’re better off just killing the timer?

Lesson #2: Recovery isn’t always a good thing: You have to consider which errors are caused by the environment, such as a network problem, and which problems are caused by bugs that won’t go away until someone updates the program.

Are you really there?

Let’s say we have WorkOrders which have tasks on them. Each task is performed by some person. We want to collect the people who are involved in a WorkOrder. You may have come across code like this:
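Something like this, perhaps (WorkOrder and Task are hypothetical classes, inlined here so the sketch is self-contained):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class WorkOrders {

    public static class Task {
        private final String assignedPerson;
        public Task(String assignedPerson) { this.assignedPerson = assignedPerson; }
        public String getAssignedPerson() { return assignedPerson; }
    }

    public static class WorkOrder {
        private final List<Task> tasks;
        public WorkOrder(List<Task> tasks) { this.tasks = tasks; }
        public List<Task> getTasks() { return tasks; }
    }

    // Paranoid version: every step is null checked, so rotten data
    // silently produces an empty result instead of an error
    public static Set<String> involvedPeople(WorkOrder workOrder) {
        Set<String> people = new HashSet<>();
        if (workOrder != null && workOrder.getTasks() != null) {
            for (Task task : workOrder.getTasks()) {
                if (task != null && task.getAssignedPerson() != null) {
                    people.add(task.getAssignedPerson());
                }
            }
        }
        return people;
    }
}
```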

In this code, we don’t trust what’s going on much, do we? Let’s say that we were fed some rotten data. In that case, the code would happily chew over the data and return an empty set. We wouldn’t actually detect that the data didn’t adhere to our expectations.

Let’s clean it up:
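A sketch of the cleaned-up version, with the same hypothetical domain classes:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CleanWorkOrders {

    public static class Task {
        private final String assignedPerson;
        public Task(String assignedPerson) { this.assignedPerson = assignedPerson; }
        public String getAssignedPerson() { return assignedPerson; }
    }

    public static class WorkOrder {
        private final List<Task> tasks;
        public WorkOrder(List<Task> tasks) { this.tasks = tasks; }
        public List<Task> getTasks() { return tasks; }
    }

    // Offensive version: no null checks; rotten data gives us a
    // NullPointerException right where the problem is
    public static Set<String> involvedPeople(WorkOrder workOrder) {
        Set<String> people = new HashSet<>();
        for (Task task : workOrder.getTasks()) {
            people.add(task.getAssignedPerson());
        }
        return people;
    }
}
```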

Whoa! Where did all the code go? All of a sudden, it’s easy to reason about and understand the code again. And if there is a problem with the structure of the work order we’re processing, our code will give us a nice crash to tell us!

Null checking is one of the most insidious sources of paranoid programming, and null checks breed very quickly. Imagine you got a bug report from production – the code just crashed with a NullPointerException (NullReferenceException for you C#-heads out there) in this code:

People are stressed! What do you do? Of course, you add another null check:

You compile the code and ship it. A little later, you get another report: There’s a null pointer exception in the following code:

And so it begins, the spread of the null checks through the code. Just nip the problem at the beginning and be done with it: Don’t accept nulls.

By the way, if you wonder whether we could make the parsing code tolerant of null references and still simple, we can. Let’s say that the example with the work order came from an XML file. In that case, my favorite way of solving it would be something like this:

Of course, this requires a more decent library than Java has been blessed with so far.

Lesson #3: Null checks hide errors and breed more null checks.

Conclusion

When trying to be defensive, programmers often end up being paranoid – that is, desperately pounding at the problems where they see them, instead of dealing with the root cause. An offensive strategy of letting your code crash and fixing it at the source will make your code cleaner and less error prone.

Hiding errors lets bugs breed. Blowing up the application in your face forces you to fix the real problem.

Posted in Code, English, Java | 11 Comments

A canonical web test

In order to smoke test web applications, I like end-to-end smoke tests that start the web server and drive a web browser to interact with the application. Here is how this may look:
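The original test isn’t reproduced verbatim here; a sketch of its shape, using the Jetty 9 and Selenium HtmlUnit APIs (the class name and page names are mine, and the database and assertion steps are indicated as comments), might look like this:

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.webapp.WebAppContext;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

public class WebAppSmokeTest {

    @Test
    public void shouldShowInsertedPersonOnPersonPage() throws Exception {
        // 1. Point the application to a test database (e.g. in-memory H2)
        System.setProperty("myapp.database.url", "jdbc:h2:mem:testdb");

        // 2. Start Jetty on an arbitrary port and deploy the current webapp
        Server server = new Server(0);
        server.setHandler(new WebAppContext("src/main/webapp", "/"));
        server.start();
        int port = ((ServerConnector) server.getConnectors()[0]).getLocalPort();

        // 4. Insert an object into the database, e.g.:
        // personDao.save(samplePerson("Johannes"));

        // 3 + 5. Fire up the simulated browser and navigate to the page
        WebDriver browser = new HtmlUnitDriver();
        browser.get("http://localhost:" + port + "/persons.html");

        // 6. Verify that the inserted object is present on the page, e.g.:
        // assertThat(browser.findElement(By.id("persons")).getText()).contains("Johannes");

        server.stop();
    }
}
```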

This test is in the actual war module of the project. This is what it does:

  1. Configures the application to run towards a test database
  2. Starts up the web server Jetty on an arbitrary port (port = 0) and deploys the current web application into Jetty
  3. Fires up HtmlUnit which is a simulated web browser
  4. Inserts an object into the database
  5. Navigates to the appropriate location in the application
  6. Verifies that the inserted object is present on the appropriate page

This test requires org.eclipse.jetty:jetty-server, org.eclipse.jetty:jetty-webapp and org.seleniumhq.selenium:selenium-htmlunit-driver to be present in the classpath. When I use this technique, I often employ com.h2database:h2 as my database. H2 can run in-memory and so the database is fresh and empty for each test run. The test does not require you to install an application server, use some inane (sorry) Maven plugin or create any weird XML configuration. It doesn’t require that your application runs on Jetty in production or in the test environment – it works equally well for web applications that are deployed to Tomcat, JBoss or any other application server.

Please!

If you are developing a web application for any application server and you are using Maven, this trick has the potential to increase your productivity insanely. Stop what you’re doing and try it out:

  1. Add Jetty and HtmlUnit to your pom.xml
  2. Create a unit test that starts Jetty and navigates to the front page. Verify that the title is what you expect (assertEquals("My Web Application", browser.getTitle()))
  3. Try and run the test

Feel free to contact me if you run into any trouble.

Posted in English, Extreme Programming, Java, Software Development, Unit testing | Leave a comment

On solving everything except the actual problem

“The problem with Java is that it requires so many abstractions. Factories, proxies, frameworks…” My conversation partner was recounting the impression he had of his Java-programming colleagues.

I had to admit that I recognized the picture. The culture around Java programming has some pathological traits. Perhaps the least flattering is the fascination with complex technological solutions. An average Java project has frameworks (Spring, Hibernate), service buses – often several (OSB, Camel, Mule), build tools (Maven, Ant, Gradle), persistence tools (JPA, Hibernate), code generators (JAXB, JAX-WS), message queues (JMS), web frameworks (JSF, Wicket, Spring-MVC) and application servers (WebSphere, WebLogic, JBoss). And that’s just the start. Where does this impulse come from?

I have heard two theories about why Java programmers end up with such a complex everyday life. One blames the managers, the other blames the programmers. I will address both and point to a way out of the jungle.

Theory 1: Steak and strippers

Zed Shaw, the man behind “The motherfucking manifesto for programming, motherfuckers”, blames the IT managers. He claims that salespeople from the technology giants take IT managers out to strip clubs and buy them steak dinners. And presto – the IT manager buys a useless tool that the programmers are then forced to use. (Zed points out that more female IT managers would be a good thing, since they at least would be less interested in the strip clubs.)

Ref: Zed Shaw: Control and responsibility (http://zedshaw.com/essays/control_and_responsibility.html). Zed Shaw: Programming, motherfuckers (http://programming-motherfucker.com/). Zed Shaw: Video – Steak and Strippers (http://vimeo.com/2723800).

The argument is entertaining, but it only explains part of the problem. Many of the tools behind the complexity of Java projects are open source and are typically not sold by salespeople with fat expense accounts.

Theory 2: Blinking lights

I believe a more important theory is that programmers are attracted to fancy things that blink. The simple things the company makes money on are boring. Sitting down and moving data from the database into a web page is boring for a programmer. Learning a new framework that does the same job in a fancy way is exciting. Removing technologies and creating a simpler solution is boring. Introducing a framework to wire the technologies together is exciting.

Everyone prefers doing what they find exciting. I know, because I have been there myself.

Fundamentally, I believe Java programmers have brought upon themselves the problems with complex technologies that they so often complain about.

The way out of the wilderness

For me, the most important step out of the wilderness was the talk I gave at JavaZone two years ago. In the talk, Anders Karlsen and I build a web application in Java without using a web framework. (Imagine hearing an ironic gasp here.)

I have practiced solving the problems I encounter in projects using simpler technologies. That way, I know what the complex solutions actually do. And so far, there are almost no solutions that don’t introduce more problems than they remove. What they do is shift the focus from the task the program was actually meant to solve over to all the technological pieces that have to be made to work together.

I don’t think programmers are better or worse than other people in this respect. But we all have a tendency to spend our time finding exciting problems to solve instead of solving what we were actually supposed to solve in a naive and simple way. My father put it best: You shouldn’t use a calculator until you can do the same calculations on paper.

Posted in Java, Non-technical, Norsk, Software Development | 27 Comments

Only four roles

Many sources of stress on projects come from forgetting what our roles are. Scrum championed a simple set of roles with the development team, the Scrum Master, and the Product Owner. The first problem is that many of the people affected by agile projects don’t fall into any of these categories, and many of them are important. The second problem comes from forgetting that the only roles with authority, the Scrum Master and the Product Owner, are the least important people on the whole project.

When creating something of value, the first people we care about are those who will get value from the product we create. I call these Consumers. These are the users and those who are affected by the work of the users. For a call center application, it would be the customer service representative as well as the person calling who has to wait for the service rep to look up information in the slow system.

Without caring about the Consumers, the product has no value.

The second category are those whose work goes into creating the product. It may be people who create layouts and graphics, people who develop the application, people who examine the application to make sure it performs as needed and people who train the consumers in using the application. I call these people the Creators.

Without the Creators, there will be no product.

But in order to create the product, someone usually has to put money on the line. I call this the Sponsor(s). The sponsor is the person who can really decide that, “yes, we will let five people work on this for a year”. If the Creators work for free, they are also the Sponsors. Otherwise, the sponsor is the person who signs their paycheck.

Without the Sponsor, the Creators will starve.

It’s worth noting that many Product Owners, Scrum Masters, Architects and Project Managers fall into none of these roles. The product owner is seldom an actual Consumer of the product, and in very few cases does he pay the salary of the Creators. Instead, he talks to the Consumers and helps the Creators understand what to create. In the same way, a good Scrum Master can ask good questions of the Creators that will help them avoid impediments and work better.

I call everyone who doesn’t Consume the product, Create the product or Pay for the product a Helper. When you facilitate a meeting, write a report or take the requirements from the Consumers to the Creator, you are helping. If you’re doing your job right.

The funny thing is this: Most people with authority in most organizations have Helper roles. But nothing is worse than a “Helper” you don’t need, but who insists that you do what they say.

I am a Helper, and this makes me nervous. If everybody is a Helper, nothing gets done. At best, I can make others better able to do their job. At worst, I distract from real progress.

Helpers must be humble

Posted in English, Extreme Programming, Software Development | 3 Comments

A jQuery inspired server side view model for Java

In HTML applications, jQuery has changed the way people think about view rendering. Instead of an input or a text field in the view pulling data into it, the jQuery code pushes data into the view. How could this look in a server-side situation like Java?

In this code example, I read an HTML template file from the classpath, set the value of an input field and append more data based on a template (also in the HTML file).

This is a simplified version of the HTML:

This is a third way, distinct both from templated views like Velocity and JSP and from component models like JSF. In this model, the view, the model, and the binding of the model variables to the view are all separate.

Disclaimer: In this example, I’ve used my still-in-pre-alpha XML library with the working name of Eaxy. You can get similar results with libraries like jSoup and JOOX.
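Since Eaxy is pre-alpha, here is a rough sketch of the same binding idea using only the JDK’s built-in DOM API, with a hypothetical template inlined as a string (the real code would read it from the classpath):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class JavaQueryExample {

    // A hypothetical, well-formed template
    static final String TEMPLATE =
            "<html><body>"
            + "<form><input type='text' name='firstName' value=''/></form>"
            + "<ul class='people'><li class='template'/></ul>"
            + "</body></html>";

    // Push the model into the view, jQuery style: find elements and
    // fill them, instead of the template pulling data in
    public static String render(String firstName, String... people) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(TEMPLATE)));

        // roughly $("input[name=firstName]").val(firstName)
        Element input = (Element) doc.getElementsByTagName("input").item(0);
        input.setAttribute("value", firstName);

        // roughly $("ul.people li.template").clone() for each person
        Element list = (Element) doc.getElementsByTagName("ul").item(0);
        Element template = (Element) list.getElementsByTagName("li").item(0);
        for (String person : people) {
            Element item = (Element) template.cloneNode(true);
            item.removeAttribute("class");
            item.setTextContent(person);
            list.appendChild(item);
        }
        list.removeChild(template);

        StringWriter writer = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(writer));
        return writer.toString();
    }
}
```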

Caveat: I’ve never tried this on a grand scale. It’s an idea that compels me for three reasons: First, it’s very explicit. Nothing happens through @annotation, conventions or some special syntax in a template. Second, it’s very unit testable. There is nothing tying this code to running in a web application server. Finally, it’s easy to get to this code through incremental steps. I initially wrote the example application with code that embedded the HTML as strings in Java code and refactored to use the Java Query approach.

Could this approach be worth trying out more?

Posted in Code, English, Java | 3 Comments

Scrum as an impediment to Agility

As I’m working with smaller and more agile projects, I’m increasingly seeing the classic way that Scrum is executed as more of an impediment to agility than a helper.

This is especially the case when it comes to the classic Sprint Planning as described in the Scrum Guide:

  • “For example, two-week Sprints have four-hour Sprint Planning Meetings”
  • In the Sprint Planning Meeting part 1: “Development Team works to forecast the functionality that will be developed during the Sprint.”
  • In the Sprint Planning Meeting part 1: “Product Owner presents ordered Product Backlog items”
  • “Work planned for the first days of the Sprint by the Development Team is decomposed to units of one day or less by the end of this meeting”

I’ve seen many sprint planning meetings struggle for the same reasons again and again:

  • The user stories described by the product owner don’t fit the team’s way of working
  • The team dives into too many details on each user story to be able to break it down to the level required
  • The team blames the product owner for not providing enough details to the user stories
  • Most of the design discussions are considered to be over once the sprint starts
  • The forecasting/commitment to future velocity becomes a heated negotiation

If your project has experienced these sorts of Sprint planning meetings, I would expect that the project’s reaction was to add meetings (“backlog grooming”), documentation and checkpoints prior to starting a new sprint. These activities probably resulted in the product owner (team) spending less time with the development team.

Scrum’s Sprint planning assumes a situation where the product backlog is detailed a considerable amount of time into the future and where the ideal is that the product owner spends their time continuously adding more details to the product backlog.

The resulting projects have huge, rigid backlogs describing the details for several months into the future. The communication between the users and developers is limited to the acceptance criteria that the product owner writes down before each sprint planning. The teams spend a considerable amount of the sprint planning the rest of the sprint. Deviations from the sprint backlog are considered problematic.

I think this is misguided. I think this is why we left waterfall in the first place.

In order for Scrum to work better, we have to abandon the idea that the product owner comes to the planning with a perfect set of stories, we have to abandon the sprint backlog detailing the work and design for several weeks and we probably should be very careful with what estimates we ask for.

Instead I would suggest the following approach to planning a sprint:

  • The product owner and the team come into the room informed by their current understanding of the value the system can deliver
  • The product owner describes the current most important gaps in the value available to stakeholders
  • The team already knows their current trajectory and together with the product owner, they can describe “what’s the next meaningful thing we could demonstrate to close these gaps” as a script for the next demonstration
  • The team isn’t asked to estimate their work, but the product owner, project managers and others are free to make qualified guesses based on the team’s past performance
  • Keep it short and frequent!

Scrum was developed at a time when it had to match the perception of projects that did huge batches of planning and design. In response, it does smaller batches of planning and design. But “give a man an inch and he’ll take a yard”. The smaller batches lead to frustration over lack of details, and the sprints become more and more plan-driven and the connection between the users and the developers more and more document-driven.

A new approach is needed.

Posted in English, Extreme Programming, Software Development | 8 Comments

How to start an agile project

During the recent panel debate in Colombo Agile Meetup my colleague Lasantha Bandara asked the following question:

How do you start an agile project and ensure room for future enhancements? How can we achieve flexibility at the beginning?

This is my answer:

Flexibility is about having an acceptable cost of change. Sometimes, the best cost of change is to create something and throw it away early to try something else if it doesn’t work out.

The first technical decision I make on projects is what I call the deployment model. That is: How will the users access the application and where will the data reside? The most common categories of deployment model include web, disconnected client, rich client with server, and mobile client. There are other, less common ones as well, for example an email responder. A mobile web client could also be considered a category of its own, and so, perhaps, may a rich JavaScript web application.

As Sabri Sawaad said during the panel: You must make decisions, but consider the cost of changing them.

You have to choose a deployment model and client-server communication protocol before you write any code that you can demo. Often, the project comes with an assumed deployment model, but it’s not always set in stone. For example, on a project where the customer was aiming for a traditional web application, we suggested a JavaScript application due to the skillset of the developers. In another project, we challenged the customer to try a rich client instead of a web application, but after the first sprint, we discarded the code and did the web application instead. In a final example, when we discussed the requirements further with the client, it turned out that a web client was more appropriate than the initial idea of a mobile application, so we had to replace team members to get the correct skillset.

As an architect, it’s part of your job to help the customer find a deployment model that matches the requirements of the users and the skillset of the team.

On the panel, Subuki Shihabdeen and Buddhima Wickramasinghe both pointed out the importance of early feedback. In my opinion, this implies that the beginning of a project is a bad time to learn new technologies. When you know the deployment model, use the technologies and frameworks you are familiar with to quickly show something to the customer. If this means delaying using a database, as Hasith Yaga suggested in the panel, do this. If it means using Entity Framework and SQL Server, because the team is familiar with this, do that. But as Sabri said: Invest in strategies to minimize the cost of changing these decisions, such as encapsulating the data layer.

In order to quickly choose and roll out a first demo, the only thing I’ve found to work is practice. I’ve created applications from scratch repeatedly in many frameworks and languages for practice. Each time, I do it faster.

So here is my process, summarized:

  1. Spend time to practice technologies so you can choose good options with confidence
  2. Help the customer choose an appropriate deployment model based on the user needs and developer skills (including your own)
  3. Put something in front of the customer quickly by using your existing skills
  4. Invest to reduce the cost of changing decisions on frameworks and technologies
  5. Change course if you find a better technology or even deployment model
Posted in English, Software Development | 4 Comments

A canonical Repository test

There are only so many ways to test that your persistence layer is implemented correctly or that you’re using an ORM correctly. Here are my canonical tests for a repository (Java version):
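The original tests aren’t reproduced verbatim; a sketch of the shape (with a hypothetical PersonRepository interface, and a trivial in-memory stand-in so the sketch runs anywhere – a real test would construct the JDBC- or ORM-backed implementation against an in-memory H2 database instead) might look like this:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.UUID;

public class PersonRepositoryTest {

    public static class Person {
        final String id = UUID.randomUUID().toString();
        String name;
    }

    // Hypothetical repository interface; yours would be backed by JDBC or an ORM
    public interface PersonRepository {
        void save(Person person);
        Person retrieve(String id);
    }

    // Trivial in-memory stand-in so this sketch is self-contained
    static final PersonRepository repository = new PersonRepository() {
        private final Map<String, Person> rows = new HashMap<>();
        public void save(Person person) { rows.put(person.id, person); }
        public Person retrieve(String id) { return rows.get(id); }
    };

    private static final Random random = new Random();

    static Person samplePerson() {
        Person person = new Person();
        person.name = "Person-" + random.nextInt(1000000);
        return person;
    }

    // The canonical test: save a random person, read it back, and
    // verify that every field survived the round trip
    public static void shouldRetrieveSavedPerson() {
        Person person = samplePerson();
        repository.save(person);
        Person retrieved = repository.retrieve(person.id);
        if (!retrieved.name.equals(person.name))
            throw new AssertionError("saved and retrieved person differ");
    }
}
```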

Very simple. The samplePerson test helper actually generates random people:
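A sketch of such a helper (the field names and name lists are mine):

```java
import java.util.Random;

public class SampleData {

    private static final Random random = new Random();

    private static final String[] FIRST_NAMES = { "Ada", "Alan", "Grace", "Linus", "Barbara" };
    private static final String[] LAST_NAMES = { "Lovelace", "Turing", "Hopper", "Torvalds", "Liskov" };

    public static class Person {
        public String fullName;
        public int yearOfBirth;
    }

    // Generates a random but plausible person, so each test run
    // exercises slightly different data
    public static Person samplePerson() {
        Person person = new Person();
        person.fullName = pick(FIRST_NAMES) + " " + pick(LAST_NAMES);
        person.yearOfBirth = 1900 + random.nextInt(100);
        return person;
    }

    private static String pick(String[] alternatives) {
        return alternatives[random.nextInt(alternatives.length)];
    }
}
```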

If your data has relationships with other entities, you may want to include those as well:

A simple and easy way to simplify your Repository testing.

(The tests use FEST assert 2 for the syntax. Look at FluentAssertions for a similar API in .NET)

(Yes, this is what some people would call an integration test. Personally, I can’t be bothered with this sort of classification)

Posted in Code, English, Extreme Programming, Java, Unit testing | 5 Comments