Here are the slides:
Hashes seem simple. Set a key to a corresponding value, retrieve the value by key. What else is there to know?
A whole lot, it turns out! Ruby makes several surprising choices about how hashes work, which turn hashes into dynamic powerhouses of functionality.
We’ll dive into how hashes work, understand how they form the groundwork for some extremely useful standard library utilities, and learn patterns to leverage the unparalleled versatility of the humble hash to write concise, performant, beautiful code.
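As a small taste of those "surprising choices," here's a hypothetical Ruby example (my own illustration, not taken from the slides) showing two of them: default values and default blocks.

```ruby
# Counting occurrences with a default value of 0 -- no need to check
# whether a key exists before incrementing.
counts = Hash.new(0)
%w[ruby rails ruby].each { |word| counts[word] += 1 }
counts # => {"ruby"=>2, "rails"=>1}

# A default block that builds nested hashes on demand ("autovivification"),
# one of the dynamic tricks the default proc makes possible.
nested = Hash.new { |hash, key| hash[key] = Hash.new(&hash.default_proc) }
nested[:talks][:railsconf][:year] = 2018
nested # => {:talks=>{:railsconf=>{:year=>2018}}}
```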
I’ve given a few talks before, including several at previous RailsConfs, but this time was… different, somehow.
I’d like to share my experiences with you, in the hope that you’ll learn something of value along the way.
The process that led to my talk, The Stories We Tell Our Children, began about two and a half years ago, when my wife bought our first Israeli children’s book, A Tale of Five Balloons by Miriam Roth. I started to read it to my daughter nearly every night, and she loved it, but honestly, the book bothered me. It was sad through and through, and I couldn’t figure out why it was so clearly popular. It took me months to learn how it had become popular, and even then I wasn’t fully comfortable with it. It took at least a year to make my peace with the book, and to start to appreciate and even love it.
Through the process, I started to see how literature and the society within which it exists are intimately connected. It made me reconsider a lot of the American early childhood literature I grew up with. And the thoughts began to percolate around whether this could be something worth exploring in the context of a conference talk.
Time went on, and I began to read further. More Israeli books came into my life, and I continued to follow the pattern and see the connections. I started to seek out Israeli children’s classics actively, and try to perceive how each connected with Israeli history. I also read some literary criticism, mainly that of Dr. Shimona Fogel, which helped clarify things for me as well.
About 9 months ago, I decided I had enough material for a talk, and I had a sense of what I wanted to talk about. I submitted the topic to the RubyConf CFP, and was waitlisted, meaning I still got a free conference ticket, but would only give the talk if someone backed out at the last minute. No one did. In retrospect, this was a blessing; the talk was still somewhat raw and unrefined. I’m happy I got a few extra months to let things settle more in my head and on my slides.
Since I had been waitlisted rather than rejected, I already knew my proposal was pretty good. So I edited the proposal a bit more, and submitted it to RailsConf.
In case you’re wondering, I didn’t submit the talk to smaller conferences, for two reasons:
Small conferences tend to be on weekends. I don’t go to conferences on Saturdays for religious reasons, and I also don’t like the idea of making a long trip to a 2-day conference where one day is Saturday. Add the various Jewish holidays, and a great many conferences simply aren’t relevant possibilities. RubyConf and RailsConf are wonderful exceptions to this rule, since they are specifically held midweek so as to interfere minimally with attendees’ lives and to let them have full weekends before and after the conference.
Even if a relevant conference could be found, I had a sense this talk was special. It was certainly special to me. And I wanted to make sure I got to share it with as many people as possible. RubyCentral conferences would be my best shot at sharing these ideas with a large group, between the attendees and the many people who watch the Confreaks videos afterwards.
At any rate, one day I got an email from the conference organizers, stating that they’d decided my talk would make a great keynote, and they’d like me to take one of the keynote slots. Especially given that I’d been waitlisted last time, this wasn’t what I had expected, so of course I was thrilled to have the opportunity!
However, I was a little bit concerned. I knew I’d be addressing some Israeli history, which can be a sensitive and emotional topic for many people, and I definitely didn’t want to ruin someone’s conference experience by saying something insensitive or biased. Furthermore, because it’s a keynote rather than one talk of many in a multi-track slot, it wouldn’t be a talk people chose to attend. Keynotes also generally don’t get an abstract published in advance, and even whether to include the title in the program was a choice. (I decided yes.) So I wanted to make sure I’d have resources available for content review. Once I received confirmation from the RubyCentral team that this would be made available to me, I accepted the slot and started improving the talk.
I’ve given several talks before, but this was a keynote. That means a bigger audience and higher expectations. So I set a higher bar for this talk and made every attempt to meet it.
I knew more or less what I wanted to say, but somehow all sorts of ancillary information always manages to creep its way into a talk. And there were some things I wanted to discuss that would have been ill-advised, given my goal of not conjuring up uncomfortable feelings for people during my talk.
I expanded and pared down, again and again, until I felt pretty ready. Sadly, my talk was significantly longer than the recommended 40-50 minutes, so aside from wanting general feedback, I needed advice on what to take out.
Some folks from the US and UK were visiting the Cloudinary Israel office, and I figured it would be a good opportunity to get feedback from an audience that would be fairly representative of the people who would ultimately hear the talk. I invited the visitors, and ended up with a room of 5 participants in the audience. I got some great feedback, and ended up making many significant changes (additions and deletions) in response to their comments.
I rehearsed the talk once every 1-2 days for two weeks, each time tweaking a bit more. Once I felt comfortable with the newly edited version, I did a second round of testing on a group of my Israeli peers. They corrected a few things which I had confused due to being less sensitive to some finer points about Israeli culture. (I am, after all, an immigrant!) They also made a few suggestions to help me make it appeal more to the audience.
The most important bit of feedback I got in both rounds was to make sure to continuously ground the historical/cultural content in something practical. One of my reviewers put it this way: I was forcing people to maintain a long buffer, to remember a large percentage of the talk over many minutes, in order to understand the takeaways at the end. This made it harder for an audience member to maintain focus and understand what’s important to hold onto over time.
I sent a video of a rehearsal to the organizers and noted the parts that likely needed content review. It took some back-and-forth to make the review happen, and it wasn’t completed until the day prior to the talk, but thankfully it worked out. My test audiences’ feedback had already put me in a place where no further changes were necessary. This was a great relief for me as the time drew near.
In the meantime, I continued making small tweaks, even on the plane ride, or in the conference hotel, and even the morning of the talk when something clicked in my head and I wanted to include the idea.
Some people have to “seal” a talk some time before giving it, in order to rehearse a final version and feel comfortable onstage. For me, though, a talk is a living entity, not done being developed until the moment I’m onstage and it’s too late to make changes. It’s an outlet of my mind and my emotions, and it needs to reflect who I am and what I think and feel when I give the talk. That dynamism lets me bring my entire self to the presentation.
It was time. I headed over to the convention center, and reached the main ballroom 15 minutes early. I hooked up the microphone and laptop, and then spent a few minutes talking to some friends who had also arrived early. That helped calm my nerves a bit, but with around 5 minutes to go, I decided I needed some quiet time. I went backstage, and took a few moments just to breathe and relax. Abby, one of the organizers, got up to make the morning announcements, and then it was time!
I got up, took in the round of applause, took a deep breath, and started. The first couple of slides were completely scripted, which helped me get into the rhythm of the talk, and the rest was bullet points and notes to let me speak more naturally. I heard the crowd laughing, crying, even gasping at one point, and I fed on that energy and connection to bring even more of myself into the talk.
It’s a surreal experience, being in front of a huge crowd and conveying ideas you care about deeply. At some point, at least for me, the power of the ideas themselves begins to carry you. You feel like you’re floating, you forget that your body exists, you simply become a conduit for thought and emotion, via the medium of words.
My memories of the talk itself are fairly sparse. I know I covered all the slides, and I remember a few key moments in time, but most of it is a big blur. I was in another world, yet very much connected with the audience at the same time.
In the past, I’ve tended to take audience questions on the side after the talk, then take a break for a while, and later in the day start going to talks again. This time, I had a couple of talks I really wanted to hear. So I went and listened. And then it was lunch break, but instead of eating I went back to my room, took a shower, put on more comfortable clothes, and went back to the conference with a clearer head. Still, it took a few hours to feel completely back to normal.
I was overwhelmed and often surprised by people’s reactions to my talk. I imagined I’d get some interesting responses, but even so, a few things caught me by surprise. Some highlights:
Many other people approached me in the days following my talk. Some wanted to connect on a more practical level; I was invited to be a guest on two podcasts, which should be a fascinating experience. Mainly, though, people wanted to share all kinds of reflections, mostly centered around being more aware of the messages in children’s books and in the stories we tell.
I’m still unpacking the experience, but I’ve drawn a few lessons from my little adventure.
First is the power of vulnerability. This talk put me in a potentially very vulnerable place. It involved many of my own feelings, both about literature and about the society in which my wife and I have chosen to live and raise our children. But I found that people were very receptive to my honesty and openness about myself. There are certainly limits, and I needed to take great care to ensure I wouldn’t tread on someone else’s feelings. But I was amazed at how positively people reacted to the talk and the content.
Next is the value of letting ideas simmer. This talk came to be over a period of years, as the result of a personal journey. Even once I put together the initial abstract, nearly a year passed before I gave the talk onstage. The passage of time allowed me to clarify my goals in the talk and what exactly I wanted to convey in the time allotted.
Third is the value of testing. I thought I had a pretty good talk, but I found out I was too close to the material. Hearing the reactions of a test audience helped me to notice the ways in which my talk could be improved and made more valuable for them. Because this isn’t about me getting to spew at people. It’s about what they carry with them when they exit the room.
Fourth, I was prompted a few times to make the talk feel more relevant. It’s easy for a speaker to forget that you need to have a strong and clear answer to the question, “Why should I care about any of this?” And it needs to be stated early and often. I think my talk was much better received because I chose to scatter some of the takeaways and lessons throughout, rather than concentrating everything at the end of the talk.
Finally, I was intrigued to see how different people took different things away from the talk. Some focused more on issues of literal children’s literature, while others wanted to talk about messaging in media and in programmer culture. Everyone listened to the same talk, but heard something a bit different. This reminded me that as a speaker, you can’t control what your audience will hear, but that’s OK. Maybe everyone hears what they need to hear at that time. It’s a gift to them; they choose what to do with it. And my job as speaker is to give up the illusion of control and allow each attendee to interpret in their own way.
Thank you, dear reader, for listening. I hope it was worth the time and energy you’ve invested, and you’re a bit wiser now than you were before reading. I’d love to hear what you learned from my experiences in the comments.
And the slides:
Every society has its own stories, which draw on the specific characteristics of that group of people and speak to their emotional underpinnings. We might not even notice it in our own surroundings, but it becomes quite apparent when studying the literature of other cultures.
In that vein, we’ll examine several Israeli children’s classics, seeing how they reflect the unique history and culture of the country. Reflecting upon that example, we’ll think about what aspects of the Ruby community are reflected in the stories we tell the next generation, and which missing stories need to be told.
For the last 2 years, I’ve run Dev Empathy Book Club, and the site hasn’t changed much. I’ve tried to keep it low-effort so I can focus on the community and the content we’re producing. One casualty of this was that the site, while simple, wasn’t very performant. (Google’s PageSpeed Insights gave it a very low score of 30/100 on mobile.)
I recently began working at Cloudinary, and I realized it’s pretty embarrassing that, as an employee of a company whose product is all about optimizing media on the web, I have a personal site that does a terrible job of it.
The final bit of encouragement came from fellow Cloudinarian Eric Portis, who published an article about Website Speed Test, a free tool from Cloudinary to grade image performance on your site. When I ran it against the Dev Empathy Book Club site, I saw that users had to download 1.5MB, which could be optimized down to 370kB, i.e. about ¼ of their weight. I also knew these images were being served directly from GitHub Pages, without any CDN, so on mobile devices the page load was pretty slow.
On top of all this, there was a good amount of render-blocking JS and CSS being downloaded without a CDN.
All this meant slower load times, and lower scores in search results. There was no good reason for it, except that I didn’t have the know-how to improve things, or the time to learn how to do it.
Cloudinary is a robust but easy-to-use service to upload, transform, and serve images and videos. The free tier contains way more than you’ll ever need for a simple static site, so it’s a great choice for e.g. personal sites with a few images you’d like to serve efficiently.
One awesome feature of Cloudinary which made this incredibly simple is the ability to auto-fetch images.
For example, consider this URL:

https://res.cloudinary.com/caplan/image/fetch/https://amcaplan.ninja/images/<image>.jpg

The URL consists of a prefix:

https://res.cloudinary.com/caplan/image/fetch/

which tells Cloudinary you want to fetch an image for the caplan cloud (you create a cloud with a unique identifier when you sign up for Cloudinary), and the rest is the URL where the image can be found:

https://amcaplan.ninja/images/<image>.jpg
When you hit this URL, Cloudinary will fetch the image in the background, and begin serving it via CDN.
Theoretically we could take all the images on the site and preface each with the fetch incantation, but there’s a better way. Cloudinary has another feature called Auto Upload, which lets you create folders which are proxies for web locations. So if we create a ninja_images directory mapped to https://amcaplan.ninja/images/, the URL looks like this:

https://res.cloudinary.com/caplan/image/upload/ninja_images/<image>.jpg
Much better! Here’s the result:
Now comes the fun part.
Cloudinary lets you edit images by adding transformations right into the URL.
For example, by adding /w_100 before the image location, we create a 100-pixel-wide version of the same image:

https://res.cloudinary.com/caplan/image/upload/w_100/ninja_images/<image>.jpg
You can crop, set gravity (focusing on a region of the image or on human faces), scale, add text layers or image overlays, and do a whole bunch more awesome stuff, just by adding to the URL.
This opens up the opportunity to create multiple versions for various breakpoints, driven via CSS. So if you take a large version as the original, you can tell Cloudinary to crop/scale the image as you see fit, no Photoshop skills required!
As one concrete example, here’s a large image for wide screens:

https://res.cloudinary.com/caplan/image/upload/f_auto,q_auto/ninja_images/<image>.jpg
You’ll notice a couple of transformations here: f_auto, which chooses the most bandwidth-optimized image format for the user’s browser, and q_auto, which reduces image size by degrading image quality in ways imperceptible to the human eye. Those 2 transformations alone reduce the image size from 874kB to 385kB, without any noticeable difference to the user!
But we can do better on mobile, where this many pixels still aren’t helping anyone. Here’s a scaled-down version for mobile:

https://res.cloudinary.com/caplan/image/upload/c_lfill,g_auto,w_480,h_800,f_auto,q_auto/ninja_images/<image>.jpg
In this case, we’re creating a tall image bounded at 480px width, centered on what Cloudinary determines to be the most interesting part of the image, and using a fill approach to the crop (expressed as c_lfill) to ensure we cover the entire requested dimensions of 480x800.
There are many parameters and even more options for those parameters, but the documentation is quite thorough and the system is really powerful.
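To get a feel for how these URL-based transformations compose, here’s a small Ruby sketch. This is purely my own illustration, not Cloudinary’s SDK; the helper name is made up, and the cloud name and file paths are assumptions.

```ruby
# Hypothetical helper: compose a Cloudinary delivery URL from a public ID
# and a hash of transformation options (e.g. w: 100 becomes "w_100").
def cloudinary_url(public_id, cloud: "caplan", type: "upload", **options)
  # Join each option as "key_value"; sorted only to make output predictable.
  transformations = options.map { |key, value| "#{key}_#{value}" }.sort.join(",")
  segments = ["https://res.cloudinary.com", cloud, "image", type]
  segments << transformations unless transformations.empty?
  segments << public_id
  segments.join("/")
end

cloudinary_url("ninja_images/header.jpg", w: 480, h: 800, c: "lfill", g: "auto")
# => "https://res.cloudinary.com/caplan/image/upload/c_lfill,g_auto,h_800,w_480/ninja_images/header.jpg"
```

Cloudinary’s real SDKs handle all of this (and far more edge cases) for you; the sketch just shows how the URL pieces fit together.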
To see a real-life example for what this might look like, check out the CSS for Dev Empathy Book Club’s site on GitHub.
At first I assumed that Gravatars (we display a few) on the site would work the same way, but I soon realized there is a big problem with Gravatar. The URL for an image looks something like this:

https://www.gravatar.com/avatar/<hash>
with this result:
If I want a larger version, I just change the s query param. So for a 400px square, I’d use this URL:

https://www.gravatar.com/avatar/<hash>?s=400
Lacking the s parameter, Gravatar defaults to an 80px square:

https://www.gravatar.com/avatar/<hash>
If you try to fetch a large Gravatar avatar with Cloudinary, like so:

https://res.cloudinary.com/caplan/image/fetch/https://www.gravatar.com/avatar/<hash>?s=400

the result is still the 80px default.
What happened? Cloudinary treats ?s=400 as a meaningless parameter passed to Cloudinary, and doesn’t forward it to Gravatar.
This can be fixed, though, by URL-encoding the ? character as %3F, like so:

https://res.cloudinary.com/caplan/image/fetch/https://www.gravatar.com/avatar/<hash>%3Fs=400
This technique should work for any characters you might need to include in the fetch URL.
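If you’re generating these URLs programmatically, the encoding is easy to automate. Here’s a Ruby sketch (my own helper, with an assumed cloud name and a hypothetical Gravatar hash) that percent-encodes the query characters before building the fetch URL:

```ruby
# Percent-encode query-string characters in a source URL so Cloudinary
# forwards them to the origin instead of swallowing them.
def fetch_url(source, cloud: "caplan")
  encoded = source.gsub("?", "%3F").gsub("&", "%26")
  "https://res.cloudinary.com/#{cloud}/image/fetch/#{encoded}"
end

fetch_url("https://www.gravatar.com/avatar/abc123?s=400")
# => "https://res.cloudinary.com/caplan/image/fetch/https://www.gravatar.com/avatar/abc123%3Fs=400"
```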
However, that’s not the end of the story. What happens when someone updates their Gravatar image? Ideally, it would get updated on our site, too. But on the free plan, fetched images never change. (They can be configured to be updated on paid plans.)
It turns out that someone at Cloudinary thought of this, and therefore built Gravatar support directly into the platform. Unlike the fetch and upload image types we’ve seen so far, there’s also a gravatar image type which knows how to source a high-quality image from Gravatar, and update it automatically, with a small delay, when someone changes their avatar! (There are similar systems for other social networks, including Facebook, Google+, Instagram, and Twitter.)
If you fetch images via Gravatar in this way, you can easily scale up or down using the normal h_ and w_ parameters. So here’s that same 400px image of yours truly, fetched via Cloudinary:

https://res.cloudinary.com/caplan/image/gravatar/w_400,h_400/<hash>.jpg
Of course, once you’ve done this, you can use f_auto and q_auto to optimize images further and reduce bandwidth use. Neat!
One little-known fact about Cloudinary: They can serve anything via CDN, not just images and video! So if you have JS or CSS files, you can serve them through Cloudinary’s CDN in the same fashion as mentioned above for images: Set up an Auto Upload folder and reference those URLs instead of the place where they’re hosted on your site. So for example, instead of:
https://devempathybook.club/css/<file>.css
we reference:
https://res.cloudinary.com/<cloud>/raw/upload/css/<file>.css
(where css/ is a folder mapped to https://devempathybook.club/css/). Note that instead of image as before, we write raw to indicate that this should be considered an unknown file type and Cloudinary shouldn’t try to do any image processing with it.
Usually you’ll want to use a versioning strategy for your JS and CSS assets if you use a CDN, but the goal here was to be lazy on a static Jekyll site. Since there wasn’t much custom CSS and JS, I simply left a few files that are loaded directly from GitHub Pages, but things that won’t change frequently (or ever) are served via Cloudinary’s CDN. You can see the code here.
You might notice, if you looked at the code from the last section, that a number of lines were commented out. It turns out that the Jekyll template I used bundled a number of JS/CSS frameworks and plugins I didn’t actually use. Removing them reduced the total page load size and made the page run faster, since there’s less for the CPU to worry about. As they say, no code is faster than no code!
I wouldn’t call the site blazing-fast now, but its PageSpeed mobile score went up from 30 to 50 in a few simple steps taking a couple hours total. There are more things to optimize, but these quick tricks helped bring down page load time a lot already. Importantly, time to first paint on mobile was cut by about 50%. That’s a much better experience for mobile users.
So go out, try these tips, and let me know in the comments how you did!
As a reminder, I work at Cloudinary, so if you do find anything here difficult to implement, I can pass along your concerns to the right people… 😉
P.S. If you use Jekyll or some other blogging framework, and you have many images on your site, it may be worth going further with automation using a plugin. For example, jekyll-cloudinary lets you define transformation presets, and does all the work to generate URLs for images at various screen sizes. It’s pretty magical. Of course, if it’s a dynamic site, Cloudinary has a host of SDKs which can do everything discussed here, and much more!
NOTE: Cloudinary did not ask me to write this. Nothing in this post should be taken as representing anyone other than myself.
UPDATED 2019-01-29: Added option to include URL-encoded characters in a fetch URL.
No matter which way I explained it, she just kept getting confused. Why couldn’t she understand that making these changes would drastically increase response time on a critical endpoint? It was a simple workflow involving 2 microservices and a NoSQL database, and she didn’t even have to understand the details, just how they were connected together on a high level.
At some point, I realized: No one had ever given Sierra any level of technical explanation of the system whose development she was supposed to guide every day. Instead of going further with the conversation, I asked, “Why don’t we set up a meeting just to describe the basic outline of the system? Nothing overly detailed, just enough to allow us to have a conversation about how product concepts will impact the real-life product when they’re translated into code.”
To my surprise, she agreed. To my further astonishment, I actually enjoyed the meeting more than any other I’d had since beginning my software development career. We slowly built up a diagram of the parts of the system relevant to her job, clarified confusing points, and made sure every bit of explanation was clear to her.
At the end of the meeting, Sierra thanked me and said, “You know, no one’s ever done this for me. This is going to significantly improve my ability to come up with ideas and communicate with developers. I’d really like to understand more about the technical elements of the project, but there never seem to be opportunities for me to learn.”
At that moment, I realized something that has become a theme in my career: The most significant impact you can make on a product isn’t through design, code, marketing, sales, or customer support. It’s building bridges, enabling people of varied backgrounds and skills, each with their own perspectives and spheres of understanding, to work together through effective communication.
As I continued to mature and advance in my career, I ended up taking on the role of a more senior developer and mentor, as well as having more say in the work I was doing. Although I saw myself taking on more responsibility, there was no event that sparked a big change, until…
In late 2016, I attended RubyConf in Cincinnati, where I heard Paulette Luftig’s talk, “Finding Your Edge Through a Culture of Feedback”. She ended with a few recommendations for books to read, and I decided that this should be the next step for me. There are many fantastic texts about developing skills in communication, team-building, empathy, and other soft skills; I’d be cheating myself of personal and professional growth if I didn’t take advantage!
A few days after the conference, I bought a number of books recommended in that talk, plus a few I’d seen suggested around the internet. And so the fun began…
Anyone wanna #BookClub on some of these?
— Ariel Caplan (@amcaplan) November 15, 2016
Nonviolent Communication
Difficult Conversations
Emotional Intelligence
& 2 Dale Carnegie classics pic.twitter.com/8eXPyzNTGK
I wanted more, so I ordered books online. Like a good programmer, I was approaching stack overflow…
As a reminder, if you want to join me in journeying through these empathy- (& sometimes dev- too!) related books, ping me, let's #BookClub! pic.twitter.com/z6IuWVDdbR
— Ariel Caplan (@amcaplan) November 29, 2016
It was difficult to stay motivated and keep reading, so I knew I had to actually get serious about making this thing happen. Also, I would get a lot more out of it if I could discuss my thoughts with others. So I decided that RailsConf 2017 would be the Big Bang for Dev Empathy Book Club. I designed a site, set up a book club on Goodreads, and walked into the conference ready to recruit.
Justin Herrick gave a workshop about team-building and communication, so he seemed like a good person to try; he was! Amy Unger approached me in response to a tweet and asked to join as well; I was certainly delighted to have another thoughtful voice actually approach me about the club! So we had a panel.
I also gave a lightning talk (which I’m far too proud of) about a silly little hack, and used the opportunity to plug the project. So we got a bit of free marketing.
Then everyone went home, and the real work began.
Reading a dense book and actually trying to incorporate some ideas into your life every 2 months is a reasonable but still significant commitment. Even more difficult is coordinating with other people to make time for a panel discussion, keeping online materials up to date, and generally promoting the project. Dev Empathy Book Club is important to me because I think it adds a much-needed voice of compassion and humanity to the commotion in our industry. But it takes a lot of time and effort.
We also needed to evolve. Goodreads has a very unfriendly system for managing a club, and the forums weren’t easy to use. So we moved to an open Slack channel. Then, we wanted to make it more interactive, less a pure announcements conduit, so we started a monthly open discussion in Slack.
We also needed to be flexible. With a small number of panelists (though we’re looking to grow!), sticking to a schedule has been a real challenge, resulting in 5 instead of 6 books covered over the last year. We’re hoping to do better this year, to provide a more consistent experience for participants in the club.
While being part of Dev Empathy Book Club, I took on an informal team lead role within my working group. Since we went through a few management-related books at the same time (The Mythical Man-Month, Radical Candor, Peopleware), I had the opportunity to think about the ideas conveyed and experiment in a real-life work environment. I saw that people respected me because I invested a lot of thought into how teams work in general, and how to make my specific team more effective and simultaneously much happier.
Then, a month ago, I was informed that I would have an opportunity to move into a management role at my company.
A year ago, I would have been terrified, if the opportunity had even been presented, which of course it wouldn’t have. I wasn’t ready. Now, while I didn’t walk into the role as an expert (it takes a lot of experience to get there!), I could consider myself educated enough to know how to learn the rest. A year of investment truly paid off.
Even if management will never be your thing, soft skills are the big differentiator between people who write code and people who solve real-world problems using code. They might save you years of wasted effort by solving the people problems that would otherwise have demanded countless lines of code. And they will definitely make you happier in your job. You have to deal with other people all the time; you may as well learn to enjoy it and make the most of it!
Becoming a more empathetic, compassionate, kind person with better communication skills will probably be the most valuable investment in your career (and possibly your life in general).
Is Dev Empathy Book Club right for you? If you’re looking for something concrete and consistent to add to your routine to develop personally and professionally, check it out!
If this isn’t the right time for you to join, that’s fine too. I’d love if you would share this post, and the club, with friends, or tweet about it with the hashtag #DevEmpathy. (Or just mention our Twitter account, @DevEmpathy!)
You can find out more information on our site, devempathybook.club.
And the slides:
P.S. If you happen to subscribe to RubyTapas, you can also check out a brief, more technical take on the same material in my guest episode!
Ugh, documentation.
It’s the afterthought of every system, scrambled together in the final days before launch, updated sparingly, generally out of date.
What if we could programmatically verify that our API documentation was accurate? What if this helped us build more intuitive APIs by putting our users first? What if documentation came first, and helped us write our code?
With Swagger and Apivore as our weapons of choice, we’ll write documentation that will make your APIs better, your clients more satisfied, and you happier.
This question is difficult to answer precisely because there isn’t a single answer. Sometimes the blame falls to technical debt which hamstrings scalability, the ability to ship new features, or the ability to respond to market demands. Other times it’s the lack of business model, which sinks the entire company. In certain situations, various parts of the organization not seeing eye-to-eye is the culprit; the lack of shared vision causes sales to over-promise, engineering to develop the wrong things, or marketing to pursue the wrong strategy.
The causes are many and varied, yet somehow as engineers we focus a lot on “Good Code” (however we choose to define it), which fails to address most of these problems. Why?
If I were to hazard a guess, I’d say it’s because we as technical people are trained (or believe we are trained) to understand issues of Good Code more easily than we can solve business challenges or organizational dysfunction. As humans, we tend to favor investing time in the problems we know how to solve rather than the problems that most need careful solving (Parkinson’s Law of Triviality). Good Code is a problem we think we know how to solve, so we try to solve it and forget about the larger questions that determine the success or failure of our endeavors.
Traditionally, we see the role of engineers as outputting high-quality software that meets a particular need. We then define “high-quality” in purely technical terms. This has to end.
The only point of writing software is to solve problems. In the context of a business, every bit of software writing should be meant to target one of three fundamental problems every business faces:
For the remainder of this post, I’ll include those 3 elements in the (badly defined) term “business value.” Other places on the internet may define business value otherwise; that’s fine, it’s just for this post.
If the purpose of software is to generate business value, it stands to reason that the quality of software is simply a matter of how much business value it generates. “Is it high-quality?” becomes a question of “How fit is it for purpose?”
That definition will probably make a lot of engineers uncomfortable. Isn’t my job to write code, and someone else can think about the business impact?
Sure, you could look at it that way. But that means that the fundamental question of whether your software is valuable—and, as I define it, high-quality—rests in the hands of other people without your input.
So think of it this way: The more you involve yourself in understanding, and maybe even influencing, the business elements of your project, the more effective you’ll be at creating the software your business/clients really need.
We do many things as engineers and as organizations to improve the quality of our software. I believe all these practices really target one or more of 3 primary objectives, which I term Usefulness, Sustainability, and Accuracy. (You’ll note that the acronym is USA. No, I didn’t choose the words for the acronym, it sort of just happened.) Let’s define these terms a bit better:
With these 3 major objectives in mind, let’s get into the weeds a bit and think about how they impact our day-to-day work.
Every team, project, and situation will have its own way of defining how various practices support (or don’t) the 3 objectives. I’ll just give a few examples of practices that I’ve found to be impactful on the teams where I’ve worked. Let’s start with a visual map of how I see things:
Without getting into the gory details (though I did give a talk about that), here’s a guide to interpreting that picture.
The blue circle on the bottom is probably easiest to understand. It includes a variety of practices designed to increase confidence that what’s in your head matches what’s in the code. This includes testing practices, programming language features, tools and techniques for reducing complexity, and increasing the number of programmers who see and interact with code before it’s committed and deployed.
The green circle on the top-right is about maintaining flexibility while avoiding elements of instability. Anything that makes it easier to build without breaking things, creating a tangled mess, or backing yourself into a corner (from a perspective of product development) goes there. Also included are practices that build the team, improve the skills of developers, and make it easy (in the context of a larger organization) to interoperate with other teams and/or move people across team boundaries.
The top-left red circle is about connecting our applications to their purpose. Probably the most important piece is “Focus on Delivering Value”; all else can be derived from it. The red circle is populated by practices that help you understand your users more effectively, keep their needs in mind as you code, and do the most important work first. There are elements of both making the solution that works for them (researching their needs, making it performant) and making the solution work for them (providing it when they need it, with appropriate documentation, and the ability to find what they need).
One non-obvious (and likely controversial) thing is the fact that I’ve put a number of technical practices into the red circle. I believe that when we have multiple people working on code, or we explicitly document how a system is to be used via integration testing, that helps us focus on the end value provided to the user, at least opening up space for having conversations about the business value created by our software. I don’t think we’ve fulfilled our obligation to the Usefulness objective just by doing those things, but they’re a good start.
I’ve also mentioned a few central practices, which are just my opinion (as is the rest of this map):
Again, these are just my own opinions, based on my experiences with these practices and how I’ve seen them utilized on the teams I’ve been part of. Your team will derive more or less, and different, benefit(s) from these same practices, and that’s normal and expected.
As an exercise, I’ve made a blank version of the map available in PDF, Keynote, or PowerPoint form for you to fill out with your own teams. I’d love to see how your maps stack up against mine!
Bob Martin, citing Kent Beck, wrote that the Agile Manifesto was intended “to heal the divide between development and business.” Unfortunately, 16 years later, that’s nowhere near a solved problem.
I believe that divide can be healed if we learn to speak a common language, relating elements of technical excellence to meeting business needs, showing how the things we care about as engineers are things everyone should care about. That means going beyond our technical peers to understand the needs of other parts of our organizations, and figuring out our role in meeting those needs.
If we learn to speak the language of business, just a little bit, we can expect to see a lot more understanding and respect coming in the opposite direction, from businesspeople to developers. Maybe we’ll even develop psychological safety and trust. Wouldn’t that be great!
We’re all in this together. Let’s start acting like it.
Note: Based on a talk given at RailsConf 2017. Check out the original talk here.
]]>And the slides:
You care deeply about code quality and constantly strive to learn more. You devour books and blogs, watch conference talks, and practice code katas.
That’s excellent! But immaculately factored code and clean architecture alone won’t guarantee quality software.
As a developer, your job isn’t to write Good Code. It’s to deliver value for people. In that light, we’ll examine the effects of a host of popular coding practices. What do they accomplish? Where do they fall short?
We’ll set meaningful goals for well-rounded, high-quality software that solves important problems for real people.
Download the exercise as a PDF, Keynote, or PowerPoint file.
Reflecting afterwards, I noticed a few mistakes I made in the presentation, and would like to note them here:
It certainly got me to release more material than has been my recent practice, but it’s worth analyzing the process, the outcomes, and the cost.
My usual blogging workflow (if such a thing can exist) looks something like this:
This leads to a smaller number of (hopefully) high-quality writings worth sharing. In contrast, my workflow this week looked more like this:
The result was a lot more content, but on occasion I wondered whether the stuff I was publishing was worth reading. Trying to be faithful to my arbitrary commitment, I may have pushed through an idea that wasn’t perfect, or wasn’t fully developed in my head yet.
Interestingly, though, I found that the very act of writing was less capturing thoughts and more creating them. Starting with just the nugget of a thought, the experience of writing allowed the thought to develop into a full-blown argument or hypothesis.
I know everyone writes differently. Some people start with an outline, then fill it in with details as they go. I have never worked that way as long as I can remember. In elementary school, when teachers would ask for an outline as the first step of an essay, I would hand in my outline, get it graded, then throw it out and actually begin writing. To me, the written word is a river; I go where it flows. My words and myself are partners in creativity, building crude thoughts into concrete concepts and coherent frameworks.
Incidentally, this is also how I write poetry. I don’t have a destination in mind; I write the first line, then the second, and let the words guide me wherever they may. Of course, in poetry and prose, there are editorial steps as well, but those merely optimize the core of the idea as it stands when first composed.
In truth, the artificial time pressure made room for a workflow more suited to how I naturally write. I’m not in a place to judge the end product, of course. I leave that as an exercise for you, dear reader.
I can’t say whether my (small) audience enjoyed my posts. I can, however, comment on their more objective elements.
One easily quantifiable measure is the ratio of technical to non-technical posts. We can knock out the first, introductory post, as well as this one, leaving us with 6 posts to analyze.
This Is Your Brain on Ruby was decidedly technical. Diversify Your Learning and How to Give a Great Tech Conference Talk were decidedly not. The remaining 3 posts focused on the area where I’m most comfortable and (I think) most effective, namely the human side of technology.
Comparing this to the past, through mid-2015 I only wrote technical posts, then I abruptly released 5 straight completely non-technical posts. In that light, this exercise was a recalibration for me, centering me between the purely human and purely technical.
Another significant outcome for me was the ability to flesh out some back-burner thoughts into ideas that I may now use to submit to conference CFPs. Some of the stuff I wrote about was never fully developed, and having this obligation helped me to realize that there was significant depth where I hadn’t perceived any in the past.
One final note is in order. I’ll readily admit to being an attention seeker, and this exercise brought out the worst in me in that regard. I tweeted daily about the experiment, and watched my Twitter notifications and Google analytics to see if anyone noticed.
No one did. (Well, except for this really nice comment from Peter Cooper.) And honestly, it kind of hurt.
Yes, I know it’s a really busy time for the world. People are with family, or taking vacations, or whatever. Somehow, logic notwithstanding, part of my brain resented it anyway.
So it became an opportunity to exercise discipline, and accept that I write all this not because other people read it, but because it’s worthwhile in itself. Expressing my ideas, and staying accountable by doing so in public, allows me to form a more concrete perception of who I am, and why I do what I do. It also allows me to turn a critical eye on myself, and figure out whether I’m fooling myself into doing things that counter my self-interest or harm others.
It’s not an easy lesson to learn, but maybe it was more valuable to me to have my blog ignored than it would have been to have people reading and talking about it.
Wowee. It’s been a long week.
I started the challenge with 2 blog posts already written, 1 good idea, and the knowledge that the final post would be a “lessons learned” exercise. That left 6 posts to write, 4 of which needed ideas. This is actually a pretty tall order for someone like me, who tries to keep the content high-quality and valuable to others.
Coming up with post ideas mostly happened while walking around, or otherwise going about my daily business. There was some dedicated brainstorming time, but that wasn’t too significant.
The real time sink was the writing process itself. Writing and editing posts could easily take 3-4 hours apiece, more if I needed to add some custom JavaScript (as I did for the censoring functionality here to make blog readers, and myself, more comfortable). Those 3-4 hours needed to be highly focused, and I would walk away feeling drained and needing a break.
It helped to have a lot of vacation time, and even working hours were pretty relaxed since lots of people take vacation in the last week of December. That meant I had time even during work hours to be cranking out posts, though mostly they were done during personal time.
Also not helpful: My 18-month-old got sick this week, sicker than she’s ever been. Seeing your child with a temperature of 40℃ (that’s 104℉ for you imperialist Americans) is terrifying, and practically it forces you to drop everything. The worst of it lasted about 2 days, during which time I didn’t get much done in general.
I had hoped to stay ahead of the curve throughout the 8 days, always at least one post ahead, but reality hit hard, and I ran out of headway by Friday. I had to crank out one post on Friday morning, and another on Saturday night. (Anything computer-related is off-limits to me Friday evening through Saturday evening, since that’s the Jewish Sabbath.) So I made it, I guess, but it was down to the wire and quite stressful.
Even when I was ahead of the curve, I found myself staying up hours later than usual just to get things done in time. This made it much more difficult for me to function throughout the day.
If I do something like this again, I realize I can’t push myself this hard. It’s just not worth it. Releasing a post a day could be fun in the future, but I’d want to have all, or nearly all, the posts ready well in advance.
In the final calculus, I think it was a worthwhile experiment. I paid a heavy price in terms of stress, but it helped me to think of blogging as part of what I normally do, and to find a more balanced voice with regard to the content of my posts.
I hope this experiment will inspire me to keep growing as a writer, to keep developing interesting ideas, and to share them with you, my readers.
Whenever you read this post, whether it be a day or a decade after I wrote it, I hope you find its contents, and the products of this week’s efforts, valuable and interesting. And I hope this week turns out to be the beginning of a very productive 2017!
Written as part of the 2016 8 Crazy Blog Posts Challenge.
]]>But once people are actually at a conference, especially a larger conference, the number of things to do and to absorb can be overwhelming. There are talks, maybe even multiple talks at once, there’s the hallway track, vendor booths, and of course the great big world outside where people escape when they want a break. Everyone competes for attention, offering excitement, swag, fun hacking activities, and/or the alluring prospect of job opportunities.
Then there’s the new practice of posting talks online. This opens up talks to new audiences who couldn’t make it, which is fantastic. It also means that if the community loves a talk, it could be seen by thousands more people than were at the conference! The downside is that conference attendees may be more likely to skip talks because they’re all online later anyway, meaning that speakers lose the opportunity to build off the speaker-audience interaction.
What’s a speaker to do?
I have to admit something upfront. This post isn’t really about how to give a great conference talk per se. It’s about how to give a conference talk I’m going to want to watch. And as a conference-goer, I want to watch a talk that tells a coherent, interesting story.
The good news is that, according to science, the other attendees think the same way. Stories impact the brain in a truly wild fashion.
Of course, much of this makes sense if we reflect on our own lives. When you watch a movie, you worry about what will happen to the characters, even though you know they’re not real. When a character is maimed or killed, you feel pain, even though no one has hurt you.
When you read dry statistics about suffering, it probably doesn’t motivate you to action. When you see a video with a tearful person describing some horror they suffered—well, that’s another matter entirely. This is why charities seeking donations rely mostly on monologue videos, stories, and even handwritten thank-you letters, rather than facts and figures. Stories motivate people to act.
Think about how many religions don’t have founding myths. Can you think of any? Me neither. People don’t dedicate their lives to ideals, noble though that may sound. A compelling story, though… now we’re talking.
You want to give an interesting talk. You want the conference attendees to come, and you want the online crowd to hear out the whole talk, not leave the video after 2 minutes.
You want your conference talk to make an impact. You want people to remember it and talk about it. You want people to learn from it and apply your ideas in their own lives.
Tell a story.
No blog post about stories would be complete without a few personal stories, right? Well, get ready, because here they come!
My first conference talk was at RailsConf 2016, where I spoke about Rails engines. This was the abstract:
Want to split up your Rails app into pieces but not sure where to begin? Wish you could share controller code among microservices but don’t know how? Do you work on lots of projects and have boilerplate Rails code repeated in each?
Rails Engines may be your answer.
By building a simple Rails engine together, we will better understand how this app-within-an-app architecture can help you write more modular code, which can be gemified and reused across multiple projects.
It doesn’t appear to be a story at first glance, but read it again. In case you missed it, here’s the story:
Once upon a time, you had a messy app that was suffering for lack of modularity. Or, you kept writing the same boilerplate multiple times.
Then, a conference talk showed you how to build Rails engines, and you were able to take control and conquer your mess. You were awesomer and your apps were healthier.
The end.
One of the best stories you can offer people is that you will make their lives happier and better. (Unfortunately this tendency can be abused by ill-intentioned people out to make a buck.) In this case, though, I made a simple value proposition which I truly believed in, and the result was that every chair in the workshop had a butt in it, and 130 people walked out having built their first Rails engine.
I’d like to focus for a moment on a feature of the Rails engines talk which stood out dramatically for me.
In the introductory portion to my workshop, I had to convey that Rails can route a request to any Rack app. This point is critical to understanding Rails engines, because they take advantage of this feature of Rails to tie into Rails apps. I wanted my audience to build an accurate mental model of how Rails engines are integrated under the hood, and this was the critical point for them to understand.
I could have thrown that information on a slide, said a few words about it, and moved on. But information conveyed that way is easily missed, and I didn’t want to lose people through confusion about this point.
So I told a story.
I told my audience I would build, before their eyes, a perfectly functional Rails application without using models, views, or controllers. I created a new, blank Rails application, and added an empty lambda to my routes.rb file.
Then, step by step, I let the errors guide me to building out that lambda into a minimal Rack application serving a response to a single endpoint.
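For readers who want the flavor of it, here’s a minimal sketch of such a bare-lambda Rack endpoint (the path, variable name, and message are my own invention, not the workshop’s code):

```ruby
# A Rack app is just an object responding to #call(env) and returning a
# [status, headers, body] triple, so a lambda qualifies.
hello_app = ->(env) {
  [200, { "Content-Type" => "text/plain" }, ["Hello from a bare lambda!"]]
}

# In a Rails app, config/routes.rb can route straight to it
# (path and name here are hypothetical):
#
#   get "/hello", to: hello_app

# Calling it directly, outside any server, shows the plain Rack contract:
status, headers, body = hello_app.call({})
```

That contract (callable in, status/headers/body out) is the whole trick that lets Rails engines hook into a host application.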
Could I have just pre-built a Rails app containing a Rack app? Yes, of course. But building up the tension through live code, and letting errors drive the coding, let the audience discover with me the ability to hook Rack applications into Rails apps.
After the talk (as well as in practice runs before), this was the part that generated the most positive feedback. It wasn’t all that difficult to do, but the experience of watching code develop incrementally was riveting for the audience. It has all the parts of classic storytelling: Facing a seemingly impossible challenge, a hero must discover heretofore unknown abilities, and then is able to surmount the obstacle. In this case, the impossible challenge was building a Rails app without controllers, and the unknown ability was using a barebones Rack app.
That talk unfortunately wasn’t recorded, but I can give a favorite example: the late Jim Weirich’s legendary talk on the Y combinator. The experience of watching a master develop an idea through code is enough to send chills down my spine.
Of course, live coding isn’t for everyone. But if you present code in your talk, I’d strongly recommend starting with the problem the code has to solve, then building out the solution incrementally. Don’t just show what your code does; talk about the obstacles it has to overcome, and the process of getting there.
My most recent talk was given at WindyCityRails 2016 then repeated (and improved!) at RubyConf 2016. The approach here was different. The subject of the talk was improvements to Ruby’s OpenStruct library, but I also came in with a lot of personal experience. I had dealt with OpenStruct-related performance problems in the past, which led me to publish 2 alternative solutions.
In the first half of the talk, I established the importance of OpenStruct to the Ruby ecosystem, and the reasons for its performance problems. Then I made it personal, by talking about how these performance issues affected my company’s projects.
With the tension established, I spoke about 4 different approaches to the problem, framing it in terms of my own experience with each solution, and how they succeeded or failed in solving the problems I was having.
Since 2 of the approaches were libraries I had personally written, I was able to convey a sense of personal triumph in the degree of success they achieved. And when I concluded my talk with some lessons learned, they weren’t just logically derivable ideas; they were things I had personally learned through my experience.
I was originally worried that the very personal nature of the talk would hurt its reception. Then something amazing happened.
WindyCityRails asks participants for written feedback after the conference, then collects and summarizes the responses for the speakers as an exercise in reflection and self-improvement. There were many really nice, general comments, but 2 specific responses stuck with me:
I enjoyed the narrative, “I tried, I failed, I tried again” aspect of this.
Best presentation of the conference. Connected with me.
It turns out, people really like hearing a good story. I didn’t just tell them about an abstract problem and some ways to handle it. (We’ve all heard lots of talks which do exactly that!) I told them the story of my problem, and how I used tools like benchmarking, profiling, and reading code to find a solution that worked for me. I told my story in a way that conveyed to the audience that they, too, can use these tools to solve problems they face.
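As an illustration of the benchmarking step (this is my sketch, not code from the talk), Ruby’s stdlib Benchmark library is enough to surface the kind of gap involved:

```ruby
require "benchmark"
require "ostruct"

# An illustrative micro-benchmark comparing OpenStruct against a plain
# Struct. OpenStruct defines accessor methods dynamically on assignment,
# which makes it markedly slower on most Ruby versions; exact numbers vary.
Point = Struct.new(:x, :y)

n = 50_000
Benchmark.bm(12) do |bm|
  bm.report("Struct")     { n.times { Point.new(1, 2).x } }
  bm.report("OpenStruct") { n.times { OpenStruct.new(x: 1, y: 2).x } }
end
```

Profiling and reading the library source then tell you *why* the gap exists, which is where the real story of the talk came from.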
As a speaker, you may be afraid to make your talk too personal. Yet some of the best talks I’ve watched follow a narrative, or a set of highly personal narratives, for most or all of their length. And they’re all the more engaging for it.
You don’t have to choose just one method or one story. Even within a story, you can use a deeper level of story to great effect.
In my OpenStruct talk, for example, I had to explain how OpenStruct—a hash-like data store—and its alternatives work. It’s notoriously difficult to explain code and algorithms in a short span of time.
To counteract the problem, I kept things as concrete and story-like as possible. I introduced the code one piece at a time, following the trajectory of a single key-value pair inserted into the OpenStruct (or alternative) instance. By following a linear narrative, I was able to convey the information to the audience quickly and effectively, without them realizing how much data I was streaming into their brains.
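To make that trajectory concrete, here’s a small sketch (my own, not a slide from the talk) following one key-value pair into an OpenStruct:

```ruby
require "ostruct"

# Following a single key-value pair into an OpenStruct: assignment stores
# the pair in an internal table and defines reader/writer methods on the fly.
person = OpenStruct.new
person.name = "Ada"

person.name      # method-style access to the stored value
person[:name]    # hash-style access to the same entry
person.to_h      # the underlying table: { name: "Ada" }
```

Tracing one pair through storage and retrieval like this is exactly the kind of linear narrative that keeps an audience oriented while a lot of mechanism goes by.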
The same principle is at play when someone spends a few minutes of a talk live coding, or telling a quick personal anecdote to drive the point home. By incorporating short stories (or story-like experiences like live coding) into the framework of a talk, you can make your presentation more engaging, connect with your audience, and help them walk away having gained the maximum from your presentation.
One really great example is this talk about effective feedback, where (just after the 20-minute mark) the speaker uses a personal anecdote to illustrate both ineffective and effective feedback. The story drives home the ideas conveyed earlier in the entire talk.
Great talks aren’t made in a day, or even a few days. It can take years to write a great talk. The best talks aren’t from people who woke up with bright ideas; they come from people who experienced something over months and years, and share their experiences in a personal, intellectually and emotionally captivating fashion.
If you want to give an amazing tech talk, don’t start by making slides or sketching out bullet points. Go out and do something. Experiment, try a new way of working, build an OSS library or contribute to one. Get involved in your local tech community, be a mentor at work, volunteer in your free time. Do interesting things, have experiences, and then think long and hard about them, learn what you can, and synthesize your learning—along with the story of what taught you those lessons—into the next great talk.
People like to hear from experts. You are the world expert on your life, your emotions, your projects, your history, and your growth. And you’ll be surprised by how much people are interested in learning about that.
To give a great conference talk, take the time to build a story worth telling. And then share it with the world. But don’t just bring the facts and figures and details to the stage.
Bring yourself. Be yourself. Share yourself.
Written as part of the 2016 8 Crazy Blog Posts Challenge.
]]>for each desired change, make the change easy (warning: this may be hard), then make the easy change
— Kent Beck (@KentBeck) September 25, 2012
The immediate context of the quote is changing code. But truth be told, it actually applies to a whole host of problems on multiple levels. It can help us fundamentally alter our practices, our teams, and all elements of the quality of our software.
Let’s understand how and why.
Let’s begin with a metaphor from the world of chemistry. Any chemical reaction is bound by activation energy, the minimum amount of energy which must be available in the environment for it to proceed. Even if the result is a lower-energy, higher-entropy configuration (a fancy way to say the universe will be happier after the reaction), the activation barrier must be surmounted in order for the reaction to proceed.
A catalyst is nothing more or less than an entity which lowers the activation barrier. Here’s a diagram of how a catalyst works:
The right side represents the higher-energy (i.e., not preferred) state, while the left side is the lower-energy (stabler) state. To cross from one to the other, there must be enough energy available to cross the peak in the center. A catalyst lowers the activation barrier such that we only need enough energy to conquer the smaller, red-dotted peak.
Also note that the activation barrier looks different from the two sides. From the right side, it’s decently high, unless a catalyst is present. From the left, stabler side, it’s really tall even with the catalyst there. This is why the form on the left side is likely to stick around.
Now let’s apply all this to programming practices. The right represents a bad, or less-than-optimal, practice you currently employ. The left represents a better way of doing things. It’s stabler – it will yield better software that will make you happier.
If you want to transition from the worse practice to the better one, you have 2 issues to consider:
By lowering the barrier, and maximizing the value—or lowering the cost—of a good practice, you’ll help yourself and others adopt the right practice and stick with it. Let’s give some examples of how this works on a number of levels.
Whatever shell you use, you get a configuration file with the opportunity to define aliases and functions. Rather than giving general principles, I’ll point to a few concrete things I’ve done to encourage myself to do the right thing.
Sometimes a simple alias is enough. One of my favorite aliases is
alias gc='git commit -v'

Short and sweet. Instead of typing out git commit, I just type 2 characters and I’m ready to go. But that comes with a distinct advantage. The -v flag shows a diff of the commit, so I have all the changes fresh in my mind as I write a commit message, and end up writing a clearer message. Sometimes I’ll see the diff and rethink my decision to commit the current set of changes; it’s a chance to give one last audit to the commit contents themselves.
Here’s another great git alias:
alias gap='git add -p'   # alias name illustrative

Instead of adding whole files, you can use -p to visually inspect every change and only include changes to certain lines in one commit, saving other changes for later. By aliasing the -p version of git add, I’ve made the -p option more attractive than typing out git add filename, or even worse, git add . (the whole directory). I have caught a significant number of mistakes (often leaving in a binding.pry call!) this way.
Here’s another. Ever been tempted to git push --force? OK, that was a funny joke. Not only have you been tempted, you’ve probably done it in the past hour, despite knowing it’s fraught with danger.
There’s a better way, though: --force-with-lease. It works like --force, but backs out if your changes would overwrite someone else’s changes. Once --force-with-lease exists, there’s not usually a good reason to push --force, except for the fact that it’s so much less typing.
Problem solved!
alias gpf='git push --force-with-lease'   # alias name illustrative; the flag is what matters
Bash shortcuts lower the activation energy by reducing the friction involved in remembering and adding particular flags and options. They also make the better practice stabler by lowering the long-term cost of sticking with it.
One line I’ll frequently write into my terminal won’t make any sense to you:
gup && safely prom
This line will update a branch with the most current version of the master branch on GitHub, run a build locally, and if everything still goes green, initiate a pull request.
gup is (in my brain) short for “git update”, and is aliased to:
alias gup='git fetch origin master:master && git rebase master'   # one plausible implementation
That pulls the latest master from GitHub into my local master branch, then rebases the current branch off of master. The goal is to ensure that I don’t submit a pull request before making sure I’m working with the most updated version of the code, in case recent changes conflict with something I changed.
Next is safely. This is a quick’n’dirty shell function:

safely() {
  drspec && cop && "$@"
}
Hmmm, those don’t really make sense either, because they also break down into shortcuts:

drspec() {
  docker-compose run web bundle exec rspec "$@"
}

alias cop='docker-compose run web bundle exec rubocop'   # plus a bunch of options, omitted here
Let’s take it from the top. We run our apps as sets of containers, which need to be run with docker-compose. We run the web container to run our tests, and the actual command is bundle exec rspec followed by whatever arguments we passed drspec. ("$@" just passes on the arguments passed to our function. So if we ran drspec spec/models/user_spec.rb --fail-fast, that would equate to docker-compose run web bundle exec rspec spec/models/user_spec.rb --fail-fast.)
Next is cop, which launches a container to run Rubocop with a bunch of options. Going up a level, safely just runs those 2, then executes whatever you passed it, short-circuiting if any of the checks fail (hence the name safely).
Finally, we have prom:

prom() {
  git push -u origin "$(git rev-parse --abbrev-ref HEAD)" &&
    hub pull-request -b master
}   # one plausible implementation
prom uses GitHub’s hub tool to open a pull request against the master branch.
Ultimately, here’s what I wrote:
gup && safely prom
And here’s what happened:
git fetch origin master:master &&
  git rebase master &&
  docker-compose run web bundle exec rspec &&
  docker-compose run web bundle exec rubocop &&
  git push -u origin "$(git rev-parse --abbrev-ref HEAD)" &&
  hub pull-request -b master
No way am I going to type all that every time! It’s sorely tempting to just open a pull request and hope for the best, but that wastes CI resources, and if I broke something badly, I just wasted other people’s time. By setting up a few bash shortcuts, I make it easy to set the process in motion, go off and do something else, and come back in a few minutes to make sure it completed properly.
It’s also worth noting that drspec is pretty valuable on its own. It’s not so much for the time saved, but more because if testing requires a lot of typing, it may not happen nearly as frequently as it should. Some use guard to solve this problem by running tests automatically, without typing at all.
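For the guard route, a minimal Guardfile sketch (using the guard-rspec plugin; the watch patterns are illustrative, assuming a conventional Rails layout) might look like:

```ruby
# Guardfile: re-run the matching spec whenever a file changes.
# Assumes the guard and guard-rspec gems are installed.
guard :rspec, cmd: "bundle exec rspec" do
  watch(%r{^spec/.+_spec\.rb$})                             # a spec changed: run it
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }  # source changed: run its spec
end
```

With that in place, running `guard` once at the start of a session removes the typing cost entirely, which is the whole activation-energy point.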
If you ask people why they don’t run their tests more frequently, 9 times out of 10 they’ll say it’s because it takes too long. They want to run their tests more regularly throughout the development process, but a single run breaks the flow of development too much to seem worth the time.
Many things work the same way. We want to change our ways, but the cost of doing things the better way is too high. We can often lower the cost by making things faster. This isn’t the place to talk about speeding up tests (that’s worth a post in itself), but I’ll give a simpler example.
As mentioned previously, I work with a containerized app. We use
bundle package
to package the gems in the Gemfile into the app directory, then
copy them over to the container image amongst everything else in the Rails root.
This means that any gem updates require rebuilding the container image.
Building the image used to take about 10 minutes, certainly enough time to
totally distract you from whatever you were doing. This, in turn, meant we were
unlikely to upgrade gems as we go, because the subsequent container rebuild was
a time-consuming process.
I recently dug in to figure out what was taking so long. It turns out that we
copy every file and then change the ownership from the root user to the
application’s user. Copying followed by chown
ing was taking the vast majority
of time. I discovered that not only were we copying app code, we also copied
over every file in the .git
directory! Since git produces a lot of files, we
had an ever-increasing amount of chown
ing to do. I learned that I can exclude
.git
from the copy operation using a .dockerignore
file, and that brought
down build time from 10 minutes to 3.
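The fix itself is tiny; assuming the Dockerfile copies the whole Rails root, excluding the repository history from the build context is a one-liner:

```shell
# Docker reads .dockerignore at build time and skips matching paths,
# so COPY (and the subsequent chown) never touch the .git directory
echo ".git" >> .dockerignore
```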
3 minutes is still a lot of time, but it’s low enough that upgrading gems is less of a hassle. And lowering that activation barrier means we’re more likely to keep our gems current.
As I stated in the intro, there are many types of problem you can solve by thinking about activation barriers. Here’s an example of solving a people problem in this way.
On my team, we believe strongly in the value of pair programming, but we often have a difficult time actually making it happen. We’re a distributed team with a chat-heavy culture, and even when pairing seems to be a good idea, it often fails to materialize.
Recently we made some changes to the way we work, and I pushed hard to have a daily standup on video chat. One major motivation was to open up the option of pairing. If we already see each other face to face, there’s a much lower activation barrier to saying, “Hey, this is a difficult task. Can you help me out with it?” Or perhaps more importantly, “Oh, I didn’t know you were working on that. Maybe I can pair with you on it? I want to learn more.”
Another example of this type of thinking is having regular retrospectives. Everyone wants to improve the team, but feedback loops don’t tend to open themselves up. It’s critical to create opportunities for team members to give feedback without having to demand to be heard. (The same applies to 1-on-1 employee-manager meetings, too.) To lower the barrier even more, you can take steps to make sure people feel safe giving honest feedback, such as using a tool that anonymizes the identity of the person giving feedback.
I’m not going to talk about this too much, since I already developed this idea at length in another post. You can constrain yourself to follow certain practices, train yourself to become more comfortable with them, and in the long run adopt the practices happily. Here, rather than focusing on the activation barrier, you accept that it will be difficult to change. But the commitment means you increase the value and decrease the cost such that it’s eventually more difficult to go back.
Your current constraints ultimately train you to operate effectively within precisely those constraints. Choose your constraints wisely.
— Ariel Caplan (@amcaplan) November 27, 2015
Always make time for testing, pairing, knowledge sharing, and learning. Eventually they will happen automatically without special effort.
— Ariel Caplan (@amcaplan) November 27, 2015
Some of the best practices actually cost very little, once you become comfortable with them. When it comes to TDD, for example, I struggled a lot at the beginning, but now I develop more quickly in a TDD workflow than when I try to test afterwards. It took practice and discipline to get here, but at this point I’d have to work hard to go back, and that’s exactly the point.
Figure out what major change you want to make to your practices, and commit to it wholeheartedly. Think about how to adopt the practice with the least possible disturbance to your workflow, and reevaluate as you go. Eventually you should find that your standard workflow has come to encompass this practice, at little to no extra cost.
Any self-aware programmer or team tries to be conscious of shortcomings and work on improving them. Often we make the mistake of simply pushing ourselves to do better. Instead, we might be better off focusing on lowering barriers to improvement. If it’s just as easy to do the right thing, we’ll find ourselves doing it more and more, and improvement will happen without adding unnecessary stress.
We are often encouraged to be catalysts for change. It’s worth remembering what a catalyst is. A catalyst doesn’t push or prod or apply pressure. Instead, it lowers a barrier and enables change to happen.
The same is true for changing our teams. You can’t force others to do things your way. But you can serve as a catalyst, removing obstacles while helping them to understand the value your suggested change will provide. If the change is legitimately valuable and the team is open-minded, usually they’ll come around.
Written as part of the 2016 8 Crazy Blog Posts Challenge.
There’s a wealth of information I can access, and more is created every day than I could ever hope to consume. Prioritization is key. What should take precedence in my learning? What will contribute most to my professional development as a programmer?
There are many approaches, ranging from “Follow your heart and specialize,” all the way to “Learn a bit of everything.” Some emphasize learning a full stack of technologies, others think you should just get really good with the tools you use every day.
My approach falls somewhere in the middle, and I’d like to share it with you. It starts with a bit of wisdom imparted over 1,600 years ago.
The Talmud records a profound suggestion which bears a surprising degree of relevance to programming:
Rabbi Safra stated in the name of Rabbi Yehoshua ben Chananya… A man should divide his learning [literally “years”] into three: one-third Mikra, one-third Mishnah, one-third Talmud. (Kiddushin 30a)
Rabbi Safra describes a method of learning where one studies multiple degrees of abstraction simultaneously.
At the highest level, one studies Mikra, the 24 books of the Bible. These books are light on detail, but establish the ethics and certain core laws which form the foundation of Jewish life.
One layer deeper, one studies Mishnah, the 60 tractates which extensively detail the multitude of laws governing every moment of private and public life for the committed Jew.
At the lowest level of abstraction, one studies Talmud, the probing analyses and debates which provide the rationale for the concrete laws. (This isn’t to be confused with what is today called Talmud, which constitutes a canonized subset of the Talmud referred to by Rabbi Safra. Naming things is hard.) One cannot effectively function at this level of low-level details all the time; it would overload the mind. But understanding the rationale for the laws helps in terms of understanding and handling edge cases, as well as situations that prior legal decisions have yet to address.
In contrast to many other religions, Judaism is extremely concerned with practice. Jews (at least Orthodox Jews, whose practice most closely mirrors the Talmud’s vision of Judaism) don’t generally describe the faithful as “believers”; instead, the usual term is “observant,” meaning they observe at least a major subset of mandated religious practices.
With that sort of attitude, it’s clearly Mishnah which describes how Jews engage with their religion on a daily basis. For daily guidance, nothing could be more critical than the Law itself.
So why not suffice with study of Mishnah? Apparently Rabbi Safra saw great value in reaching up and down one layer of abstraction. Reaching up, Mikra provides inspiration, motivation, and guiding principles. Reaching down, Talmud provides a depth of understanding which gives its own form of meaning to practice. Both are necessary in building a holistic religious personality.
In programming, we tend to operate mainly at a single layer of abstraction. As a result, when we think about learning, we often equate our own layer with the sum total of relevant knowledge, then learn more about it. In doing so, we miss the opportunity to make ourselves more effective professionals, and to broaden ourselves as people.
As programmers, we’re called upon to solve human problems through technology. Often we focus so much on the technology that we forget our goal is to solve problems for real people.
Rabbi Safra reminds us: Study Mikra. Don’t get bogged down in languages and frameworks without understanding why you write code in the first place. Figure out what motivates people, and how to communicate more effectively. Study how to design your products in a way that matches human intuition. Don’t simply hack on an app; build a whole product that actively meets the needs of your users and creates value for them.
If you program in a high-level language (as I do), it’s easy to forget about all the magic which has to happen on a lower level just to get started. There’s a whole beautiful world of hardware, machine language, operating systems, systems programming, and highly efficient algorithms. Most of it won’t matter in day-to-day programming. But on the rare occasions when it does matter, this knowledge will allow you to provide enormous value to your team, or contribute meaningfully to the programming community through Open Source projects. You might even submit a patch to your favorite high-level language!
As Rabbi Safra instructs us: Study Talmud. Don’t take the tools you see at face value; tear them apart, learn how they work, and figure out what you can do to make them better.
As a personal example, I’m currently working through 3 different books at the same time, one for each level of abstraction. Certainly, some people will do better learning only one thing at a time. YMMV. Personally, I’ve found that having several tracks of study allows me to jump back and forth based on interest and energy, and I progress faster this way than if I had only one subject to study at a time.
One significant outcome of study at all three levels has been a broadening of perspective. I’ve become less afraid of low-level programming or hardware concepts, and my ability to think about business value and UX has improved, all while building skills relevant to my primary daily work activities.
Learning isn’t just about broadening our horizons. It’s also a tool in service of career progression. Why not just specialize at one level, and get really good at doing one thing?
I actually wouldn’t discourage that at all. Specialization is healthy, normal, and probably unavoidable. There’s just too much to know to really know everything!
Still, having a window into the layers above and below your specialty will make you better at doing what you’re actually paid to do. No one is paid to think only at one level of abstraction; your job is to make your code work for the next level up through understanding the next level down.
You don’t have to learn everything. You don’t have time to learn everything. Just make sure to always be expanding your horizons both within and beyond your primary level of abstraction.
Written as part of the 2016 8 Crazy Blog Posts Challenge.
Take a Ruby program built from just ten punctuation characters, paste it into irb, and check that it works. I’ll wait here.
Wait, how does that work? Welcome to the wonderful world of BrainRuby. Check out the explanatory video, or read on.
LANGUAGE WARNING! Unlike my usual style, this post uses some particularly salty language. Not by my choice, but because these are technical terms. I’ve gone ahead and censored them for you.
In 1993, Urban Müller created a minimalistic, Turing-complete language called Brainf***. A program is written as a contiguous String using just 8 characters (&gt; &lt; + - . , [ and ]), each of which indicates a command to the interpreter.
Obviously, this isn’t intended for production code. Instead, it pushes the boundaries of what’s possible to do in programming language design. How can we do the most with the least?
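To make those eight commands concrete, here’s my own minimal Brainf*** interpreter in Ruby (not from the original post): &gt; and &lt; move the pointer, + and - adjust the current cell, . outputs, , reads input, and [ ] loop while the current cell is nonzero.

```ruby
# A minimal Brainf*** interpreter: a tape of byte cells, one pointer,
# and a jump table precomputed for matching brackets.
def brainfuck(program, input = "")
  memory = Array.new(30_000, 0)
  output = +""
  ptr = 0
  chars = input.each_char

  # Precompute matching bracket positions
  jumps = {}
  stack = []
  program.each_char.with_index do |c, i|
    stack.push(i) if c == "["
    if c == "]"
      open_pos = stack.pop
      jumps[open_pos] = i
      jumps[i] = open_pos
    end
  end

  pc = 0
  while pc < program.length
    case program[pc]
    when ">" then ptr += 1
    when "<" then ptr -= 1
    when "+" then memory[ptr] = (memory[ptr] + 1) % 256
    when "-" then memory[ptr] = (memory[ptr] - 1) % 256
    when "." then output << memory[ptr].chr
    when "," then memory[ptr] = (chars.next.ord rescue 0)
    when "[" then pc = jumps[pc] if memory[ptr].zero?
    when "]" then pc = jumps[pc] unless memory[ptr].zero?
    end
    pc += 1
  end
  output
end

# 6 * 10 + 5 = 65, the character code for "A"
puts brainfuck(">++++++[<++++++++++>-]<+++++.")  # prints "A"
```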
Unsurprisingly, many derivatives of Brainf*** exist, each pushing the boundaries (or adding a few commands) in its own way.
I was never all that interested in esoteric languages, but I was intrigued by a Brainf***-like “language”: JSF***, a 6-character JavaScript programming style which correlates 1-to-1 with any JavaScript code you might write normally.
Run a sample Hello World program in your browser console and see what happens!
I began to wonder: Could this be done for Ruby? So I set out to create a Ruby programming style that would follow a similar set of rules.
JSF*** uses a simple strategy: build up your program as a String, then eval it.
It’s possible to do that in JavaScript because any JavaScript String can be created using the 6 allowed characters (the JSF*** source should give you some idea of how it’s done), and you can eval any code by invoking the Function constructor.
For BrainRuby, both steps were fairly challenging. We’ll start by building up a String, and then address how to eval it. Ultimately, BrainRuby needs just these 10 characters to work:
$#<>{}/+"`
All it takes to build a String in Ruby is 5 characters: "<+/$. Let’s take this one step at a time.
You can create an empty String in Ruby using "".
If you shovel a number into a String, Ruby converts the number into the character with that code, then adds it to the end of the String. For example:
'a'.ord
#=> 97
"" << 97
#=> "a"
We can repeat this as many times as we want:
"" << 112 << 117 << 116 << 115
#=> "puts"
Finally, we can replace a number with 1+1+1+1...
so along with
stripping out unnecessary whitespace, that previous code sample becomes:
""<<1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1<<1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1<<1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1<<1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1
#=> "puts"
Now we’re down to just a single numeric character. How do we eliminate it?
Ruby has a number of “magic” global variables. One is `$$`, which stores the
PID of the currently running Ruby process. That doesn’t do much for us, since
it will be different every time. However, thanks to math, `$$/$$` will equal 1
every time. Replacing every 1 with `$$/$$`:
""<<$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$<<$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$<<$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$
/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$<<$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$+$$/$$
#=> "puts"
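The transformation is mechanical, so it’s easy to automate. Here’s my own sketch of a generator (not the actual BrainRuby code); since / binds tighter than +, and + binds tighter than <<, the generated expression evaluates back to the original text:

```ruby
# Build a numberless, letterless Ruby expression for any text:
# each character becomes << followed by "$$/$$" (always 1),
# summed with itself codepoint-many times.
def brainruby_string(text)
  one = "$$/$$"
  '""' + text.each_char.map { |c| "<<" + Array.new(c.ord, one).join("+") }.join
end

expr = brainruby_string("puts")
puts expr.length  # thousands of characters for a four-letter word
puts eval(expr)   # prints "puts"
```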
Now that we can write any program, we just have to eval
our String. How can
we do that?
I don’t think Ruby has any built-in way of accessing eval
without using alpha
characters, so that option is out. However, a few clever hacks will get us
there.
We can use backticks to execute a new program on the command line:
`echo "Hello"`
#=> "Hello\n"
Backticks also support interpolation:
str = '"Hello"'
`echo #{str}`
#=> "Hello\n"
So we can interpolate the program we’ve created within backticks. We’re almost there!
The command line will expect shell scripts, not Ruby programs. But the
ruby executable has a -e
flag which lets you include your Ruby
program in your invocation of Ruby:
$ ruby -e "puts 'Hello World'"
Hello World
So for the program puts 'Hello World'
, we assemble the String
ruby -e "puts 'Hello World'"
, and feed that into our backticks.
Since the String is dynamically generated as described in the previous section,
we’ll have to use interpolation to make that happen.
We’re not out of the woods yet! The program will execute just fine, but
the backticks method returns a String containing the output rather than actually
printing it out. To solve this, we’ll need to preface our program with
$><<
. $>
is another “magic” global variable,
representing standard out. <<
is just the usual shovel operator.
Taken as a whole, $><< is roughly the equivalent of print. So Ruby will run our program, then output the results to standard out!
Taken in sum, we can interpolate our generated program into an ERB template that wraps it in a ruby -e invocation inside backticks, prefaced with $><<, and everything should run as expected!
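Here’s the idea end to end, using a plain-text program for clarity; the real generator interpolates the character-built String instead, and the template shape is my reconstruction of what’s described above:

```ruby
require "erb"

program = "puts 'Hello World'"

# Wrap the program in a ruby -e invocation inside backticks,
# and shovel the captured output to standard out with $><<
template = ERB.new(%q{$><<`ruby -e "<%= program %>"`})
generated = template.result(binding)

puts generated   # $><<`ruby -e "puts 'Hello World'"`
eval(generated)  # runs the program in a child process, then prints its output
```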
This is exactly how the BrainRuby generator works. Check it out on GitHub and play around with it!
BrainRuby has one noteworthy flaw: Since everything is run in a separate process, a BrainRuby file can’t require and use something from another BrainRuby file. Maybe you’ll figure out how to solve that problem!
BrainRuby won’t be directly useful in your daily development work (I hope!). But it’s an interesting experiment that taught me a lot about some lesser-known features of the Ruby language. And of course, it was a joy to play with it, tweak it, and finally see it working!
I got a real emotional rush from my work on BrainRuby. I hope you’ve enjoyed sharing that experience and seeing what’s under the hood. Please let me know in the comments if you learned something from what I shared; I certainly hope you did.
Written as part of the 2016 8 Crazy Blog Posts Challenge.
Also, you probably fail a lot. And you’re not alone! Most teams fail miserably at the task of documentation upkeep. It reaches the point where you have to wonder whether accurate documentation is even achievable.
Most of the material you’ll find centers around practices that will help the team prioritize documentation, organize it better, etc. I think that’s a load of hooey (pardon my French). Documentation is really hard because we haven’t figured out how to automatically check that it’s accurate, and people can’t reasonably be expected to keep it all in their heads.
Until now.
Swagger, also known as OpenAPI, is a nifty tool to help you write the docs for RESTful APIs. It ultimately boils down to a JSON endpoint in your app that spits out a standardized description of how your app works. This endpoint is completely independent of the language or framework you use in your app.
I’ll mention at the bottom how you might go about incorporating Swagger into a Ruby app, but first I want to convince you to use it! So…
The really cool thing about Swagger isn’t the rules, but the power that comes with following them. Standards allow us to build powerful shared tools, and Swagger is a shining example.
Once you’ve assembled your standardized docs, you can use Swagger Codegen to spit out generated client libraries for over 40 different languages. (Warning: The code will be about as good as you’d expect from generated code. Sometimes that’s good enough!) Perhaps more practically, you can plug your docs into Swagger UI, which interprets your docs into a friendly, human-readable format. Significantly, Swagger UI allows you to fill in the expected query/data params and submit an HTTP request, leading to a far happier experience for whoever has to actually use your API. You can even generate OAuth tokens right from the web interface! Check out a sample documentation UI to see how much you get for free by following the Swagger standard. You can get Swagger UI bundled as a Docker image if you’re into that. (I am!)
But we haven’t even hit the coolest thing about Swagger, which is:
There are a host of OSS libraries around Swagger for nearly any language used in modern web development. I’ll focus on Ruby, but a quick perusal of the documentation shows that similar tools exist for JavaScript, Java, Elixir, PHP, and Python.
I’ll specifically discuss Apivore, though it’s not the only Ruby solution. Bad news for MiniTest fans, though: All the current tools for Ruby, including Apivore, are RSpec-only.
After including the gem, you’ll write the basic layout of your Apivore suite.
You feed it the endpoint that serves your documentation, then write specs for every endpoint in your documentation. Finally, you specify that all paths have been tested. This last spec is really important, because otherwise you might forget and leave out a path! It’s also nice as a way to test-drive writing specs for all the routes, since the failure message tells you exactly which paths and response codes have yet to be tested.
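Here’s a sketch of that layout, reconstructed from Apivore’s README rather than the post’s original listing (the documentation path and naming are assumptions); it needs a running app and the apivore gem, so it’s illustrative rather than runnable:

```ruby
RSpec.describe "the API", type: :apivore, order: :defined do
  subject { Apivore::SwaggerChecker.instance_for("/api/swagger.json") }

  # ... one spec per documented verb/path/response code goes here ...

  context "and finally" do
    it "tests all documented routes" do
      expect(subject).to validate_all_paths
    end
  end
end
```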
Now that you’ve got a failing test, let’s see how to write an Apivore spec.
Remember that the subject
in every spec is the Apivore::SwaggerChecker
instance for your documentation endpoint. This is important because it keeps
track of validated routes, so it can verify at the end that all routes were
validated.
Here’s the shape a sample spec might take.
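Something along these lines, based on Apivore’s validate matcher (the model and path are hypothetical):

```ruby
context "GET /posts/{id}" do
  let(:post_record) { Post.create!(title: "Hello") }
  let(:params) { { "id" => post_record.id } }

  it "returns the requested post" do
    expect(subject).to validate(:get, "/posts/{id}", 200, params)
  end
end
```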
Here, params
refers to the dynamic pieces of the requested path, in this case
just the id
of the requested post. The params
hash may also include
information intended for the headers, query string, or data body of the request.
These are specified by special reserved keys.
When you validate a path, Apivore will check that the status code and format of the response exactly match your Swagger specification, including required keys and data types. Any deviance is noted in a failure message with a helpful diff.
This may seem like a lot of work. But you know what’s a lot more work? Dealing with annoyed customers and clients who find the API doesn’t work as expected.
Let me share a personal experience. I added Apivore to our app a while ago, thinking it was a neat idea. I thought it would take me just a couple of days to get everything in order and build out the test suite.
Wow, was I wrong. It took a full month.
That’s not a matter of how fast I code, but rather because writing the documentation and testing it this way uncovered a large degree of variance between our documentation and what the API actually provided. This, in turn, was often rooted in differing fundamental assumptions about how things should work. Cleaning up all that mess took weeks! And I’m proud to say with confidence that we don’t have a mess like that anymore, because our “docspecs” (as I call them fondly) ensure that our docs are always up to date.
You’re probably messing up as much as we were. The scary part is, you don’t know where or how, and even a full manual audit wouldn’t prevent it from happening again.
Rather than driving ourselves crazy keeping code and documentation in sync, why not leverage our documentation to help us write better code?
Some will argue that this approach is backwards. Isn’t the code the main thing? Why do we want to maintain documentation plus specs around it? There are tools to derive documentation from our specs or from making API requests, so why not just use those?
I argue that that approach is actually backwards. Our documentation should exactly detail the service we provide to clients and consumers. Our code is merely the implementation of that service. So it makes sense for the documentation to be the canonical reference, while the code is tested to ensure it falls in line with the documentation.
Another benefit is that working in this way allows Documentation-Driven Development, where you make a change to the docs, then let the failing test drive you to implement the change or new feature. This leads to much cleaner design, focused directly on the ultimate value you provide your API clients. I’ve found this practice also dramatically speeds up new development on the project.
There are a few gotchas with Apivore, so let me be upfront and help you make the most of your docspec experience:
Apivore doesn’t test query parameters. Sorry. I’ve filed a GitHub Issue complaining about it, but so far no dice. I think it would be even more useful if it did validate query parameters, but I find it pretty awesome even without that feature.
Apivore specs need to run using RSpec’s defined
order,
meaning they’ll run from top to bottom every time. This exposes you to false
success, because you won’t detect order-dependent failures. You can get around
this by running all the endpoint specs within an RSpec context
that
uses order: random
, so just the last spec will always go last, but
everything else will be randomized.
You can define let statements shared by all your Apivore specs to help keep individual specs clean.
You’ll find that your Apivore specs quickly become way too big for one file.
I’ve found RSpec’s shared examples work quite well. First, for semantics, I aliased it_behaves_like to it_serves_up, so my endpoint specs read naturally.
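The alias uses RSpec’s built-in configuration hook; the spec names here are hypothetical:

```ruby
RSpec.configure do |config|
  config.alias_it_behaves_like_to :it_serves_up
end

# Then, in an endpoint spec:
describe "posts endpoints" do
  it_serves_up "a list of posts"
  it_serves_up "a single post"
end
```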
For organization purposes, I define the shared examples in a spec/requests/api directory, and make sure they have names that don’t end in _spec.rb. Finally, I require all those examples before running my specs.
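One way to do that requiring, assuming a Rails app (the glob mirrors the directory layout described above):

```ruby
# In rails_helper.rb or similar: load every shared-example file
# under spec/requests/api before the specs that use them
Dir[Rails.root.join("spec", "requests", "api", "**", "*.rb")].sort.each do |f|
  require f
end
```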
Now I just define shared examples in that directory.
With all these special modifications, a sample spec/requests/api_spec.rb for a Rails app simply wires everything together, with the actual specs defined in shared examples elsewhere.
Luckily this isn’t too complicated, and there’s plenty of Googlable help. You
have 2 basic approaches: Either integrate deeply with your programming language,
or just edit the JSON directly (or edit as YAML). In the case of Ruby,
swagger-blocks
seems to be a
popular solution, and we’ve found it useful. It’s pretty low-level, though, and
there are other solutions which might work better as higher-level constructs
depending on which framework you use. The Swagger site maintains
a useful list of language-specific tools.
I’ve seen another team just use the Swagger editor to edit their specification, and it works well for them.
There isn’t a right answer here; it all depends on whether you prefer the docs to live closer to or further from your code.
There is a learning curve to understand how to use Swagger, but the tooling is fantastic, which helps a lot. I’d recommend looking at a sample specification to get a feel for it, then edit to match your own API.
Working with Swagger has changed how I think about the API I work on every day. I often used to fall into the trap of thinking we’re building it to build it, and the documentation is “just for the users.”
Following a Documentation-Driven Development path with Swagger and Apivore, I’ve found that the user is brought to the forefront. Everything we build is in service of the product, as described in the Swagger specification, and our docspec suite ensures we don’t let our users down.
Part of the reason teams have trouble with documentation is that the users are relegated to an afterthought. It’s difficult to develop empathy for them when their mental model of the app is likely so far removed from our own.
By enforcing accurate documentation, we ensure that we’ve specified a full explanation of what the user can expect from our API. Since we’re also responsible for maintaining that explanation, it becomes a tool to change us, helping us maintain a user-centric design approach. No longer do we build features and then expose them to the user; instead, we start with the allowed requests, then build the implementation beneath the surface. Starting the development process from the user’s vantage point leads to cleaner APIs, a better user experience, and ultimately happier customers.
Written as part of the 2016 8 Crazy Blog Posts Challenge.
git commit -am "… what exactly? Filling in that line can be really tricky, and you never know when another developer (or future you) will curse your name for an unhelpful commit message.
Fortunately, many common harmful practices can be summed up into a few anti-patterns. In this post, we’ll cover 5 critical mistakes to avoid.
One of the most common mistakes programmers make (not just junior developers!)
is overuse of the -m
flag. It’s awfully convenient to write out your message
on the command line, never having to drop into Vim to edit a commit message.
Unfortunately, -m
also means you can’t (easily) write a multi-line commit.
Often, a multi-line commit is the perfect place to add a comment about why a
decision was made, the business purpose of a feature, or how something performs
(you can even include benchmarks!). When commits are viewed in the short form,
only the first line will show up, but if someone dives deeper into that commit,
they’ll find all the juicy stuff you left for them. And if you make multi-line
commits a regular practice, you’ll find that the team starts looking for them
more and more, further increasing their value.
If you don’t like using Vim, guess what? You don’t have to! Just set the
$GIT_EDITOR environment variable in your .bash_profile and you can switch to
any editor you want. I’m partial to MacVim, so I’ve set:
```shell
export GIT_EDITOR="mvim -f +startinsert"
```
to start MacVim in Insert mode. You can add whatever command line flags you wish to really customize your git editor.
I’ve also aliased gc
to git commit -v
, which prints out a diff in my text
editor below the message area. It’s not included in the message, just for me to
see while I’m writing. This way, I have a quick opportunity to look over all my
changes and make sure my message properly reflects what changed in this commit.
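To see what -v actually does, one can stand in for the editor with a small script that saves the template git provides before writing a real message; everything here (file names, message) is illustrative:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo hello > greeting.txt
git add greeting.txt

# A stand-in "editor" that records the template git shows, then
# writes an actual commit message in its place:
cat > fake-editor.sh <<'EOF'
#!/bin/sh
cp "$1" template.txt
echo "Add greeting file" > "$1"
EOF
chmod +x fake-editor.sh

GIT_EDITOR=./fake-editor.sh git commit -v -q
grep "diff --git" template.txt  # the staged diff appeared in the editor
```

Note that the diff shown by -v sits below a scissors line in the template, so git strips it from the final message automatically.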
Commits are often headlined with Update file.rb and other_file.js. This
misses the point of a commit.
If I want to know what files were updated in a commit, I’ll dive deeper with
git show
. The commit tagline serves a different purpose: explaining the
semantic nature of your changes.
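The file list is always recoverable from git itself, so the tagline can spend its characters on meaning; the file names and message below are hypothetical:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo parser > file.rb
echo ui > other_file.js
git add file.rb other_file.js
git commit -q -m "Add CSV parsing with a progress indicator"

git log --oneline     # the semantic tagline up front
git show --stat HEAD  # the file list, on demand
```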
Consider this git history:

```
Update index.html
Update post.md and config.yml
More changes
Fix stuff
Update post.md
```

Now consider this:

```
Add post on Git commit anti-patterns
Extract shared post layout into a partial
Fix broken link in the RSS feed
Add syntax highlighting for code blocks
Update About page with recent talks
```
Which one tells a more coherent story, months or years later? And keep in mind, this is just for a blog with a bunch of unrelated posts; now think about an application which has a nontrivial history of interrelated commits.
Making the point differently, the file list tells the How, but your commit history is about telling the What: What happened to this repo over the course of time? How has it changed and developed?
Very often we justify a quick “Bugfix” commit message with the thought that it’s just a bugfix so it’s not important. That could not be further from the truth!
A bug is no more or less than an application doing exactly what you told it to do. The problem is always that you told it to do something different than you really had in mind. Fixing a bug is a change in behavior; it deserves to be documented appropriately in your commit message.
What was the incorrect behavior you observed? How does your change address it? What steps did you take to ensure the bug won’t happen again: Extra tests, a guard clause, a refactor to avoid the problem? All of this is useful information when you need to revisit that code.
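A bugfix message that answers all three questions might look like this sketch (the bug, file, and regression spec are hypothetical):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo "fix" > export.rb
git add export.rb

# Observed behavior, the fix, and the guard against recurrence,
# all captured in one message:
git commit -q -F - <<'MSG'
Handle deleted owners in report export

Exports crashed with NoMethodError when a report's owner account
had been deleted. Fall back to "Unknown owner" instead, and add a
regression spec covering reports whose owners are gone.
MSG

git log -1 --format=%B
```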
This tip is simple: Keep it short!
It’s definitely important to go into detail in your commit messages. But the one-line summary isn’t the place for it. Make sure your first line is no more or less detailed than necessary, and then expand to your heart’s content in the following lines.
Tim Pope recommends that you keep the first line below 50 characters. I stretch that limit on occasion, but it’s a decent rule of thumb.
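A quick way to audit your own history against that rule of thumb (the commit messages below are invented for the demo):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo a > a.txt; git add a.txt
git commit -q -m "Add homepage copy"
echo b > b.txt; git add b.txt
git commit -q -m "This subject line rambles on far past the fifty character mark"

# Flag any subject line longer than 50 characters, with its length:
git log --format=%s | awk 'length > 50 { print length ": " $0 }'
```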
Keep in mind, when you run git log
, you’ll be reading the messages on your
screen in a big wall of text. Make sure the important words pop out (capitalize
appropriately!) and don’t create more visual noise than necessary. As
Shakespeare wrote, “Brevity is the soul of wit.”
To get to the point: You have 1 line to work with, so get to the point!
This one might be a little controversial, but hear me out.
Some shops might have a convention of prefacing a commit message with a ticket number:
```
[PROJ-1234] Fix flaky date parsing in reports
```
This might seem like a good idea. However, keep in mind that it adds significant noise to the commit message and removes focus from the substance of the commit, all while impinging on your precious 50 characters.
More importantly, the ticket number is helpful for searching, but not for eyeballing. The one-liner’s main goal should be to quickly run through history and figure out what to focus on. Once you spot the commit you want, you can dive into details. At that point, information like ticket number is useful—and that’s why you have the remainder of your commit message.
My personal preference is to always include the ticket number in the branch name and pull request title, and to always merge the pull request with a merge commit. That way, the commit messages are broken into chunks, bracketed by pull request titles which sum up the last few commits and link them to a ticket. So instead of:
```
Add export button to dashboard
Fix CSV headers in export
Add user search endpoint
Handle empty search results
Fix flaky search spec
Tweak search result styling
```

we might see this instead:

```
Merge pull request #42 from acme/PROJ-123-user-export
Add export button to dashboard
Fix CSV headers in export
Merge pull request #41 from acme/PROJ-119-user-search
Add user search endpoint
Handle empty search results
Fix flaky search spec
Tweak search result styling
```
In this case, I can clearly see which set of commits corresponds to which pull request, which then links a set of several commits with a ticket as a unit of work done. YMMV, but I find this to be an incredibly helpful way of figuring out how individual commits fit into a sequence without compromising on the limited first-line space.
Of course, to make this work, you probably want to ensure your pull requests are rebased off your main branch just before merging. Otherwise, your commits end up in a big jumble and it’s harder to make sense of things. Regardless of whether you follow my suggestion in terms of ticket numbers, I consider it a best practice to make sure related commits are grouped linearly in your Git history. It will save you a lot of confusion in the long run.
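The rebase-then-merge flow described above can be sketched end to end; the branch name, PR number, and commit messages are all hypothetical:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name Demo
git checkout -q -b main
echo base > app.rb; git add app.rb
git commit -q -m "Add app skeleton"

# Work happens on a ticket-named branch:
git checkout -q -b PROJ-123-user-export
echo export >> app.rb; git commit -q -am "Add export button to dashboard"
echo csv >> app.rb; git commit -q -am "Fix CSV headers in export"

# Rebase onto main just before merging, then merge with an explicit
# merge commit (--no-ff) whose message plays the pull-request role:
git rebase -q main
git checkout -q main
git merge -q --no-ff PROJ-123-user-export \
  -m "Merge pull request #42 from acme/PROJ-123-user-export"
git log --oneline  # the merge commit brackets the branch's commits
```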
This might seem like a lot of nitpicking for not a lot of value. In truth, I can’t guarantee immediate results because there won’t be any. It takes time, and the cooperation of a full team, to make the most of good Git commit practices. I can, however, attest to these practices having saved me countless hours in figuring out what happened in the past, why decisions were made, and even just the basics of which code additions and changes are interrelated.
I will close with one thought: Whatever your decisions, you only get one chance¹ to write history. Make it count.
Written as part of the 2016 8 Crazy Blog Posts Challenge.
¹ With Git, technically you can rewrite history whenever you want, but of course practically it doesn’t happen past a few commits.
]]>I’m certainly proud of (most of) the content. It’s shown significant development in complexity and depth since I started. But I have a lot of ideas running around my head all the time, some of them even good ideas, and I’d like the blog to reflect more of them.
So I’ve decided to give myself a Chanukah gift: I’m requiring myself to make every effort to come up with one blog post every day of Chanukah.
This probably makes you wonder a few things.
It’s really simple. I want to make my blog better, and I think it’ll be better if I’m more inclined to post things when I have ideas which are useful to others. I’ve gotten out of the habit of blogging, with the result that lots of smaller, useful-tech-tips type blog posts, and some larger thinkpiece posts, just never happened. That means people Googling to find a solution to their problem might not find what they’re looking for, and I’ve lost the amazing conversations and learning that come from publishing thinkpieces.
Also, I really like writing.
To change my habits, I’m going to try to publish 8 posts in 8 days. That’s a big deal for me; it’ll add 50% to my current total of posts. But desperate times call for desperate measures.
The 25th of Kislev through the 3rd of Tevet, which this year happens to coincide with December 25th through January 1st. Those dates aren’t related in any significant way; in 2013, for example, the first day of Chanukah coincided with American Thanksgiving.
As I’ll clarify below, Chanukah wasn’t chosen for any religious reason; it’s just a convenient (for me) cluster of 8 days to focus a rather ambitious goal!
Honestly, I’m not quite sure. I’m cheating by counting this as one post, and I have one high-quality post already written up. I also have an idea for a third. Beyond that, I’ll have to get creative!
Now’s when I’m supposed to offer some platitude about “This is my Chanukah gift to you,” or “You can think of it as a Christmas present,” or whatever. But I don’t feel comfortable with just about anything explicitly interfaith, and I don’t think I need a religious (or pseudo-religious) reason to do something valuable. Also I don’t believe in Chanukah presents, coming from a family that eschewed the practice since TBQH it seems pretty Christian. So basically, no presents. Deal.
To be perfectly honest, this is an experiment to see whether I can get my creative juices flowing. I happen to have a little extra time now, between days off and the fact that people seem to disappear for well-deserved vacation around this time of year so essentially nothing work-related gets done.
In other words, it’s the best time of year to engage in an experiment that takes a lot of time, for someone like myself who isn’t celebrating anything in particular, unless you count Chanukah, which adds maybe 20 minutes to my daily schedule but basically functions as 8 regular days.
It should be fun. There’s some good stuff down the pipeline!
If you’re first visiting the blog, I do have an About Page. I have lots of thoughts about stuff, and occasionally speak at meetups and conferences. You know, programmer stuff?
Update!!! These are the posts I’ve written as part of this challenge:
]]>Here are the slides:
OpenStruct, part of Ruby’s standard library, is prized for its beautiful API. It provides dynamic data objects with automatically generated getters and setters. Unfortunately, OpenStruct also carries a hefty performance penalty.
Luckily, Rubyists have recently improved OpenStruct performance and provided some alternatives. We’ll study their approaches, learning to take advantage of the tools in our ecosystem while advancing the state of our community.
Sometimes, we can have our cake and eat it too. But it takes creativity, hard work, and willingness to question why things are the way they are.
Here’s the official video:
And the slides:
OpenStruct, part of Ruby’s standard library, is prized for its beautiful API. It provides dynamic data objects with automatically generated getters and setters. Unfortunately, OpenStruct also carries a hefty performance penalty.
Recently, Rubyists have tried various approaches to speed up OpenStruct or provide alternatives. We will study these attempts, learning how to take advantage of the tools in our ecosystem while advancing the state of the Ruby community.
Sometimes, we can have our cake and eat it too. But it takes creativity, hard work, and willingness to question why things are the way they are.
Want to split up your Rails app into pieces but not sure where to begin? Wish you could share controller code among microservices but don’t know how? Do you work on lots of projects and have boilerplate Rails code repeated in each?
Rails Engines may be your answer.
By building a simple Rails engine together, we will better understand how this app-within-an-app architecture can help you write more modular code, which can be gemified and reused across multiple projects.
]]>