But the matrix is also incomplete in that it says so little about a coder’s collaborative abilities. Not everyone’s a Woz or a Carmack. For us mere mortals, being able to work together on a project is a force multiplier. A distributed version control system (DVCS) like Git combines the efficiency of the assembly line with the creativity of crowdsourcing. Being able to work with people across time zones and cultures, as GitHub enables one to do, is a transferable skill. This post is about how Git can be a transferable technology, with uses outside the programming sphere. It is related to a Thunder Talk I gave at the Matchbox Studio on February 6th, 2016.
From the Git-go
I am not trying to write a comprehensive guide to Git, or even cover basic commands; there are other sources online that do that much better than I can. If you look at the embedded slideshow, I point to try.github.io for getting into the commands, as well as Emma Jane Westby’s various resources for learning Git. In this post I want to go over Git conceptually and show how I used Git for application development of a different sort: job application development. I will refer to the slideshow below to help illustrate my points.
Slide 3 shows how most people do version control. They rename the file to reflect what change they made, or for what purpose they’ve revised the document. This gets unwieldy after a certain point, and God forbid you need to share this work with someone else. If you’ve ever worked on a group project where e-mail attachments are flying around with names based on whoever last touched the file, I’m sorry for inducing PTSD.
To resolve this, centralized version control systems provided a server for people to collaborate on code together. To help save on disk space, they would record the deltas (changes) made to the files instead of whole copies of the files. Workers could branch off and work on certain features without affecting the main code base, then merge things back into the fold when they were done. Like a library, you could check out and check in code to prevent people from stepping on each others’ work in the same file.
These systems, like CVS and Subversion, worked well for a while. But they also had their bottlenecks and single points of failure. So, in comes Git (and Mercurial, and Bazaar). I am focusing on Git because of the mammoth market share it now has; it has become the new lingua franca of version control.
When a programmer wants to work on a project, they can clone it into their personal repository. This sets up a relationship between their copy of the code and any subsequent work done on the originating project. It’s a best practice to clone the code into a personal repository on a server, then pull another copy of that repository onto one’s actual workstation.
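In command form, that clone-and-track setup looks something like this sketch (the repository URLs and names here are hypothetical stand-ins):

```shell
# Clone the personal copy of the project down to the workstation
git clone https://github.com/yourname/project.git
cd project

# Keep a link back to the originating project, conventionally
# named "upstream", so its later work can be pulled in
git remote add upstream https://github.com/original-author/project.git
git remote -v    # lists both "origin" and "upstream"
```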
Once a developer has a clone of a project, they can navigate its history, but most commonly they will “branch” off the project for whatever revision they want to make to the code. They then “check out” the branch so that the code they contribute stays isolated from the rest of the project. The revision control system reinforces modularity.
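As a sketch, branching and checking out (with hypothetical branch names) is just:

```shell
# Create a topic branch, then switch the working directory to it;
# commits made here stay off the project's main line until merged
git branch feature/login-form
git checkout feature/login-form

# Or, in one step:
git checkout -b feature/search-box
```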
As the programmer completes functions, they commit the work back into their local repository. These commits are snapshots of the project’s filesystem layout. Slide 9 shows how each revision (the folder numbers) records new copies of the files that changed, while unchanged files are stored as pointers back to the previous commit.
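Committing a finished piece of work (the file name below is hypothetical) is a two-step stage-then-snapshot operation:

```shell
# Stage the files to include in the snapshot
git add login.c

# Record the snapshot in the local repository,
# with a message describing the change
git commit -m "Validate user input on the login form"

# Review the chain of snapshots
git log --oneline
```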
Once they’ve reached a certain level of functionality and feature testing, the developer can push their code back up to their personal repository. If the developer wants that branch to be integrated into the original project (the “origin” repo), they open a pull request. (Slides 10-16.)
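Publishing the branch back to one’s personal repository is a push (branch name again hypothetical); the pull request itself is then opened through the hosting service, e.g. GitHub’s web interface:

```shell
# Upload the finished branch to the personal repository, and set it
# to track the remote copy for future pushes
git push -u origin feature/login-form
```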
This is where the focus shifts from the merits of a version control system to the merits of a distributed VCS. In a collaborative context, a project core member can take a look at the pull request by pulling the branch into their own personal repository. They can evaluate the functionality of the code, see whether it complies with the style guide and is well-tested and documented, and then merge it into the development branch of the project.
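The reviewer’s side of that exchange might look like this sketch, assuming a development branch named develop and a contributed branch named feature/login-form (both names hypothetical):

```shell
# Bring the contributor's branch into the reviewer's repository
git fetch origin feature/login-form
git checkout feature/login-form

# Inspect what the branch adds relative to the development branch
git diff develop...feature/login-form

# If it passes review, fold it into the development branch;
# --no-ff keeps a merge commit as a record of the integration
git checkout develop
git merge --no-ff feature/login-form
```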
A project that really has its act together can have hooks: automated scripts that push the new development code up to a continuous integration server, running it through a gauntlet of unit tests and style checks to make sure the contribution is up to snuff. If the Test/QA server finds that the code passes muster, it can be merged into the master branch, which is what most end users end up using. Slides 17-21 illustrate this process, but if you want more information, don’t overlook Emma Jane Westby’s Git for Teams. I link to her site in the slideshow as well.
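A minimal sketch of such a hook, assuming a server-side post-receive script where ci-trigger stands in for whatever command notifies your CI server:

```shell
#!/bin/sh
# post-receive runs on the shared repository after a push; Git feeds
# it one "<old-rev> <new-rev> <ref-name>" line per updated ref
while read oldrev newrev refname; do
  echo "Queueing CI build for $refname at $newrev"
  # ci-trigger "$refname" "$newrev"   # hypothetical CI notification
done
```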
I keep a Markdown-formatted résumé, a master list along the lines of a CV that charts my skills, experiences, and accomplishments, which I can then whittle down to what is most pertinent to the position I’m applying for. If the position calls for networking knowledge, I slide my Cisco certs up the batting order. If I am looking at a systems administrator job, I rearrange the bullet points so that my Linux and Windows Server projects are more prominent. If I am trying for a development role, I mention the hackathons and open source projects I’ve contributed to. (Plus, a GitHub profile is the programmer’s portfolio of the 2010s.) I also have a macro in Notepad++ that strips out all of the distinctive Markdown characters if I need a truly plain-text document to work with.

As I iterate and remix the résumé, I create a branch structure in employer/position format to keep things straight. If I add something like a new skill or certification, I can just update the master branch, then pull it into the other branches. It is easier for me to keep a résumé straight with this structure than to navigate up and down directories, renaming things based on when and where they were submitted.

What are your thoughts on this approach to résumé revision control?
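Here is a sketch of that branch structure, with made-up employer and position names:

```shell
# One tailored branch per application, named employer/position
git checkout master
git checkout -b acme-corp/sysadmin
git checkout master
git checkout -b initech/developer

# A new certification goes on master first...
git checkout master
git add resume.md
git commit -m "Add new certification"

# ...then flows out into each tailored branch
git checkout acme-corp/sysadmin
git merge master
```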