A little Ruby program to monitor Solr DIH imports

July 11, 2012 Open source, Programming No comments

Solr is a text indexing package. All interaction with it is through HTTP: you GET and POST to the service, and it sends back XML responses.

After you do the GET to start an import with Solr’s DataImportHandler, you have to check a status URL, and Solr gives a response like this:

    <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">0</int>
    <lst name="initArgs">
        <lst name="defaults">
            <str name="config">jdbc.xml</str>
    <str name="command">status</str>
    <str name="status">busy</str>
    <str name="importResponse">A command is still running...</str>
    <lst name="statusMessages">
        <str name="Time Elapsed">0:0:4.545</str>
        <str name="Total Requests made to DataSource">1</str>
        <str name="Total Rows Fetched">36262</str>
        <str name="Total Documents Processed">36261</str>
        <str name="Total Documents Skipped">0</str>
        <str name="Full Dump Started">2012-07-11 09:31:03</str>
    <str name="WARNING">This response format is experimental.  It is likely to change in the future.</str>

And then after a while when you check the status URL, the response looks like this:

    <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">0</int>
    <lst name="initArgs">
        <lst name="defaults">
            <str name="config">jdbc.xml</str>
    <str name="command">status</str>
    <str name="status">idle</str>
    <str name="importResponse"/>
    <lst name="statusMessages">
        <str name="Total Requests made to DataSource">1</str>
        <str name="Total Rows Fetched">1000000</str>
        <str name="Total Documents Skipped">0</str>
        <str name="Full Dump Started">2012-07-11 09:23:30</str>
        <str name="">Indexing completed. Added/Updated: 1000000 documents. Deleted 0 documents.</str>
        <str name="Committed">2012-07-11 09:26:01</str>
        <str name="Total Documents Processed">1000000</str>
        <str name="Time taken">0:2:31.95</str>
    <str name="WARNING">This response format is experimental.  It is likely to change in the future.</str>

But when does it finish? There’s no way to tell other than hitting that status URL and watching for it to change. I needed a tool to tell me when importing had finished, so I could use it in my makefile. It just has to check the status until it’s completed, and then exit.

So, I wrote a little program to do the monitoring, using Ruby and the Nokogiri library. Nokogiri is an HTML/XML parser with built-in XPath and CSS selector capabilities; paired with open-uri, it fills the same fetch-and-extract role I’d use Perl’s WWW::Mechanize for.


require 'rubygems'
require 'nokogiri'
require 'open-uri'

while true
    doc = Nokogiri::XML( open( 'http://hostname:8080/solr/db/dih?command=status' ) )

    # While the import is running, this status says something like
    # "A command is still running...".  It turns blank when the
    # import has stopped.
    status = doc.xpath( '//response/str[@name="importResponse"]' ).inner_text
    if status == ''
        # Get the import's elapsed time and record count and display them.
        # Note that the completed response reports "Time taken", not "Time Elapsed".
        time_taken     = doc.xpath( '//response/lst[@name="statusMessages"]/str[@name="Time taken"]' ).inner_text
        docs_processed = doc.xpath( '//response/lst[@name="statusMessages"]/str[@name="Total Documents Processed"]' ).inner_text
        puts "#{docs_processed} documents in #{time_taken}"
        break
    end

    sleep 5
end


I’m not much of a Ruby guy, but this was pretty simple to write. Most of my time was looking at Nokogiri’s method listings and reacquainting myself with XPath syntax. The one Ruby gotcha I found was that before Ruby 1.9, if your program uses any Ruby gems, you have to put require 'rubygems' before you require any other gems.
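
The XPath lookups are easy to try out against a canned response, without a live Solr. Here’s a quick sketch using Ruby’s bundled REXML instead of Nokogiri (so it runs anywhere), against a trimmed copy of the idle response above:

```ruby
require 'rexml/document'

xml = <<XML
<response>
  <str name="importResponse"/>
  <lst name="statusMessages">
    <str name="Total Documents Processed">1000000</str>
    <str name="Time taken">0:2:31.95</str>
  </lst>
</response>
XML

doc = REXML::Document.new( xml )

# An empty importResponse element means the import has finished.
status = REXML::XPath.first( doc, '//response/str[@name="importResponse"]' )
status.text  # nil for an empty element

docs = REXML::XPath.first( doc, '//lst[@name="statusMessages"]/str[@name="Total Documents Processed"]' )
docs.text    # "1000000"
```

The same expressions drop straight into Nokogiri’s .xpath calls.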

SELECT * is a bug waiting to happen

July 10, 2012 Programming No comments

A SQL SELECT statement that uses * instead of an explicit column list is a bug waiting to happen.  Beyond the quick-and-dirty prototyping stage, every SQL query in an application should explicitly specify the columns it needs to protect against future changes.

Say you’ve got a table and code like this:

USERS table:
id integer NOT NULL
name varchar(100) NOT NULL
mail varchar(100)

my $query = perform_select( 'select * from users' );
while ( my $row = $query->fetch_next ) {
    if ( defined($row->{mail}) ) {
        # do something to send the user mail
    }
}

Later on, someone goes and renames the users.mail column to users.email. Your program will never know it: defined($row->{mail}) is now always false, and the mail-sending branch will just never execute.

Here’s another example. Say you’ve got that users table joining to departments, like so

users table:
id integer NOT NULL
name varchar(100) NOT NULL
email varchar(100)
deptid integer

dept table:
id integer NOT NULL
deptname varchar(100) NOT NULL

SELECT * FROM users u JOIN dept d ON (u.deptid = d.id)

So your selects come back with id, name, email, deptid, id, deptname. You’ve got “id” in there twice. How does your DB layer handle that situation? Which “id” column takes precedence? That’s not something I want to have to spend brain cycles thinking about.
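
To make the ambiguity concrete, here’s a little Ruby sketch (no real database; the column list and values are made up) of what many DB layers do when they fold a result row into a hash. The second id silently clobbers the first:

```ruby
# Column names and values as a DB layer might receive them from
# SELECT * FROM users u JOIN dept d ON (u.deptid = d.id)
columns = %w[ id name email deptid id deptname ]
values  = [ 7, 'Susan', 'susan@example.com', 3, 3, 'Engineering' ]

# Many DB layers hand you each row as a { column => value } hash.
# With a duplicate column name, the later column silently wins.
row = Hash[ columns.zip( values ) ]

row['id']        # 3 -- dept.id clobbered users.id, which was 7
row.keys.length  # 5 -- one column has vanished entirely
```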

You should even specify which table each column comes from. For example, say you don’t want the IDs, and you specify just the columns you want. So you write something like this:

SELECT name, email, deptname
FROM users u JOIN dept d ON (u.deptid = d.id)

Later on, someone adds an email column to the dept table. Now, your “SELECT name, email, deptname” is making an ambiguous column reference to “email”. If you specify everything fully:

SELECT u.name, u.email, d.deptname
FROM users u JOIN dept d ON (u.deptid = d.id)

then you’re future-proof.

Of course, this rule doesn’t apply to code that is dealing with columns in aggregate. If you’re writing a utility that deals with all columns in a row and transforms them somehow as a group, then no, you don’t need to specify columns.

Aside from the potential bugs, I also think it’s important to be clear to the human reader of your code what exactly you’re pulling from the database. SELECT * makes it a guessing game. Which of these makes it more obvious to the reader what I’m doing?

SELECT * FROM users;


SELECT first_name, last_name, email_addr FROM users;

There are also all sorts of speed reasons to specify columns. You reduce the amount of work fetching data from the disk, and your DBMS may not even have to fetch rows from disk if the data is covered in an index. For discussion of the performance issues, see this StackOverflow thread. One thing to remember: Your code will never be slower if you specify columns. It can ONLY be faster.

The speedups are secondary, however. I want to write my queries to be resistant to future change. I don’t mind making a few extra keystrokes to make that happen. That’s why I always specify columns in my SELECTs.

My YAPC::NA 2012 recap

June 19, 2012 Open source 1 comment

Random notes and comments about YAPC::NA in Madison, WI

ack 2.0

I uploaded ack 2.00alpha01 to the CPAN.

All that week, Rob Hoelz did a ton of work, and Jerry Gay was invaluable in helping us work through some configuration issues. Then, out of nowhere, Ryan Olson swoops in to close some sticky issues in the GitHub queue. I love conferences for bringing people together to get things done.

Finally, on Thursday night at the Bad Movie BOF I hacked away on the final few tickets while watching “Computer Beach Party (1987)”. Halfway through MST3K’s take on “Catalina Caper (1967)”, I made the alpha release. If that’s not heaven, I don’t know what is.


Glen Hinkle

Mojolicious looks really cool. Glen called it a “full web framework, not partial,” although I’m not sure what would count as a partial framework.

It has no outside dependencies, and works to have a lot of bleeding edge features like websockets, non-blocking events, IPv6 and concurrent requests.

Mojo::UserAgent is the client that is part of Mojolicious, and it’s got all sorts of cool features:

  • DOM parsing
  • text selection via CSS selectors
    • For example, “give me all the text that is #introduction ul li.”
    • Command line: mojo get mojolicio.us '#introduction ul li'
  • JSON parsing
  • JSON pointers
    • JSON pointers look like XPath as a way of specifying data in
      a JSON string

Mojolicious is based on “routes”, which look like:

get '/'
get '/:placeholder'
get '/#relaxed'
get '/*wildcard'

The latter three are (apparently) ways of making flexible URL specifications that then return information to your app about the URL.
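
As I understand them, the three flexible forms differ in how much of the URL they swallow. Here’s my own rough Ruby regex analogy (not Mojolicious internals): a placeholder stops at both / and ., a relaxed placeholder stops only at /, and a wildcard matches everything:

```ruby
# Rough regex analogues of the three flexible route forms:
#   :placeholder matches one path segment, stopping at both / and .
#   #relaxed     matches one path segment, stopping only at /
#   *wildcard    matches everything, slashes included
placeholder = %r{\A/([^/.]+)\z}
relaxed     = %r{\A/([^/]+)\z}
wildcard    = %r{\A/(.+)\z}

'/foo.html'[ placeholder, 1 ]  # nil -- the "." stops a plain placeholder
'/foo.html'[ relaxed, 1 ]      # "foo.html"
'/foo/bar.txt'[ wildcard, 1 ]  # "foo/bar.txt"
```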

Sample app with Mojolicious::Lite:

use Mojolicious::Lite;

get '/' => sub {
    my $self = shift;
    $self->render( template => 'mytemplate' );
};

app->start;

__DATA__

@@ mytemplate.html.ep

Mojolicious also has its own templating language that looks a lot like Mason, but Glen said you can use Template Toolkit as well (and presumably others, but TT was the only one I was interested in.)

Full Mojolicious includes a dev server called Morbo and you can run your apps through the Hypnotoad “hot-code-reloading production server” if you don’t want to run under Apache/etc.

Another selling point for Mojolicious: They value making things “beautiful” and “fun”. Glen specifically said “Join our IRC channel. We will not be mean to you.”

Perl-as-a-Service shootout

Mark Allen


This was disappointing because I was hoping for recommendations to use or not use a given vendor’s offerings. I was hoping at least for “This vendor does this, and that one does that differently,” but all I came away with was “they’re pretty much the same.”

It’s a good sign that, as Mark put it, “getting PSGI-compliant apps into PaaS is generally pain free.”

His criteria were as follows:

  • Ease of deployment
  • Performance (ignored)
  • Cost (ignored)
  • How “magical” the Perl support is (first class or hacked together)

Why ignore performance and cost? I don’t know.

Big data and PDL

There were three sessions back-to-back about PDL, the Perl Data Language. It’s in the same space as Mathematica and R. I was disappointed because I was hoping for big data analysis outside of just number crunching. The analysis of galaxy luminosity was pretty and looked very easy to do, but it didn’t have any application I was interested in. I bailed after the 2nd talk.

My big takeaway from the talk was that I need to take a statistics class.

Web security 101

Michael Peters gave a good intro talk on security, handwaving the tech details with examples of “This is how bad guys can get your info.”

Emphasis on not trusting your client data, but I was surprised and disappointed that he seemed to steer people away from Perl’s taint mode. He made vague reference to there being bugs with regexes and taint mode, but I don’t know what he’s referring to.

Taint mode is one of my favorite things about Perl 5, and there are (last I checked) no plans for implementing it in Perl 6. 🙁

One of the examples Michael used for an example of an attack with SQL injection used sleep() to let the attacker find out information about the database based on timings. I asked him to write that up for bobby-tables.com.

On being a polyglot

Miyagawa gave a great overview of how he spends time in Perl, Python and Ruby, and what he learns from each, and what each language learns from the others.

Key point: Ruby is not the enemy. They are neighbors.

Things he likes about Ruby:

  • Everything is an object
  • More Perlish than Python
  • Diversity matters = TIMTOWTDI
  • Meta programming built in and encouraged
  • Convention of ! and ? in method names
    • str.upcase! to upcase str in place
    • str.islower? for methods that return true/false values
  • Ability to omit self
  • Everything is an expression.
  • No need to type ; (unlike Python)
  • Implicit better than explicit
  • block, iterators and yield
  • No semicolons, 2-space indent.
    • (This last one gives me the creeps. 2-space indent!??!)
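
For the record, islower? isn’t actually a core Ruby String method, but the !/? convention is real. A small sketch with standard String methods:

```ruby
str = 'hello'

# ? methods are predicates that return true or false
str.empty?              # false
str.start_with?('he')   # true

# ! methods change the receiver in place; the plain version
# returns a new string and leaves the receiver alone
shouty = str.upcase     # "HELLO"
str                     # still "hello"
str.upcase!             # modifies str itself
str                     # now "HELLO"
```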

Naming differences between the three:

  • Perl naming: Descriptive, boring, clones become ::Simple
  • Python naming: Descriptive, confusing, everything is py* or *py
  • Ruby naming: Fancy, creative, chaotic (Sinatra, Rails, etc)
  • With frameworks, all the languages get creative: Django, Bottle,
    Catalyst, Dancer, Mojolicious

When you’re going to borrow something from another language, don’t just borrow it, but copy it wholesale. Example: Perl’s WWW::Mechanize getting cloned as Ruby’s WWW::Mechanize.

Doing Things Wrong, chromatic

chromatic talked about the value of doing things “wrong” and embracing your constraints. Sometimes you can’t do The Perfect Job, and that’s OK, and sometimes comes out even better.

Example: chromatic wanted to do some parallel web fetching. He could have dug into LWP::Parallel, but instead he went with what he knew: waitpid() and shelling to curl.

Screen scraping example:

Parsing HTML with regex may be the “wrong” way to do it, but sometimes it’s the best solution.

Perl 6 lists

Patrick Michaud talked about all kinds of awesome stuff you can do with lists and arrays in Perl 6. After a bit I stopped trying to take notes and follow what he was saying and instead just let it wash over me so I could absorb the coolness.

I would really like Perl 6 to be easy enough to install for serious play. I need to get my feet back into the Perl 6 pool and see how I can help.

Tweakers Anonymous

John Anderson (genehack)

Quick overview of cool things that he has in his configs.

  • “The F keys are not just to skip tracks in your music player.”
  • Keep your configs in git. You will screw them up. This will save you.
  • Make your editor chmod +x when you create a .pl file since you know you will want to run it.

The coolest thing was this plugin called flymake. Apparently it runs continuously, submitting your code to a compiler (or perl -c) as you type. As soon as John made a typo on a line and moved to the next line, the error line was highlighted. He then demonstrated doing this with Perl::Critic, which must be dog slow, but flymake lets you adjust the frequency of checks.

Exceptional Exceptions

Mark Fowler, now at OmniTI. Great discussion of exceptions in Perl.

Returning false on failure sucks because you have to follow your failures all the way up the call tree. It’s tedious and error-prone because all it takes is one link in the chain to not propagate the error and you’re out of luck.

Using try/catch from Java.

There are three non-deprecated ways of doing exceptions in Perl: block eval, Try::Tiny and TryCatch.

eval

Block eval is often confused with eval $string, which compiles and runs a string of code. eval is an expression, not a control block, so it requires a semicolon after the closing brace. It works, but it’s a pain.

Try::Tiny

  • Simple extension to the syntax
  • Uses $_, not $@

TryCatch

  • Has named exception variables
  • Fully functional syntax
  • Very fast and featureful
  • Large dependency base

TryCatch is a little faster than Try::Tiny, but eval is much much faster than either of them.

TryCatch has much more clever syntax, but looks (to me) to be more dangerous.

Mark recommends that whatever you use, you make exceptions out of Exception::Class objects.
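
The return-false problem Mark described isn’t Perl-specific. Here’s a Ruby sketch of the same contrast (the function names are made up): an unchecked error return silently vanishes, while a raised exception propagates up the call tree until something handles it:

```ruby
# Error-return style: every caller must remember to check,
# and one forgetful link in the chain loses the failure.
def read_config_rc
  false  # pretend the config file was missing
end

def setup_rc
  read_config_rc  # oops -- return value never checked
  true
end

setup_rc  # true: the failure has silently vanished

# Exception style: the failure propagates up the call tree
# on its own until somebody actually handles it.
def read_config
  raise 'config file missing'
end

def setup
  read_config  # no checking needed here; the raise passes through
end

begin
  setup
rescue => e
  e.message  # "config file missing"
end
```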

Self-selecting for the thick-skinned means turning away contributors

May 29, 2012 Open source 2 comments

Every so often, usually in the middle of an online argument or flame war, someone will say that the climate of the group makes him or her uncomfortable. He’ll say something like “I don’t want to be around all this hostility” or, worst of all, “This makes me not want to get involved.” The reply sometimes comes back “You’re just thin-skinned.”

Labeling someone as “thin-skinned” makes no sense. There is no measure of skin thickness. When someone says “You are thin-skinned,” he’s really saying “You are less willing to put up with anti-social behavior than I am.”

I wonder what the speaker hopes for “You’re just thin-skinned” to do. Is that supposed to inspire the listener? Make him realize the error of his ways? I don’t know what the intent is, but it communicates “You are wrong to feel that way” and that’s hurtful, not helpful. There’s nothing wrong with not wanting to put up with anti-social behavior.

None of this is an endorsement of being easily offended, however you may define “easily.” I wish we all had the attitude of Gina Trapani, who once said “I eat your sexist comments for breakfast. YUM.” But not everyone does, and that’s no reason to shut them out. Yes, online communities can get hostile, but that doesn’t mean we need to tacitly endorse that hostility. We can do better, and we should, to help our communities grow and thrive.

Aside from ignoring the aspect of treating other humans with compassion, it makes no sense to ignore or insult those you see as thin-skinned. Ricardo Signes recalled a lightning talk at OSCON 2011 where someone noted “When we say that this community requires a thick skin, it means we’re self-selecting for only people with thick skin.”

Self-selecting for the thick-skinned means turning away contributors. If you were running a restaurant, and a customer said “I like the food here, but my waiter was rude to me,” the wise restaurateur would take this as an opportunity for improvement. You’d thank the patron for bringing it to your attention. You wouldn’t say “Well, that’s just the way it is here” or “You’re just too sensitive.”

There’s an adage in business that for every customer complaint you get, there are between ten to 100 other dissatisfied customers that don’t say anything and go somewhere else. This is especially so in the case of those tarred as “thin-skinned” by someone in the community. For every person who speaks up and says “I don’t like this hostility”, how many more unsubscribe from the list, leave the IRC channel or vow not to come back to the user group meeting again, all without saying a word about it?

In online communities, we’re not dealing with an owner-customer relationship, but nonetheless contributors to the community are a scarce commodity. A business owner can’t afford to turn away customers. Is your online community or open source project so flush with talent that you can turn away contributors?

My Solr+Tomcat troubles, and how I fixed them

May 22, 2012 Open source, Programming 6 comments

I’ve been working at getting Solr running under Tomcat, and spent most of a day fixing these problems. The fixes themselves didn’t take much time; most of it went to trying to grok the Java app ecosystem.

My Solr install worked well. I was able to import records and search them through the interface. Where I ran into trouble was with the Velocity search browser that comes with Solr.

I’m documenting my troubles and their solutions here because otherwise they won’t exist on the web for people to find. Putting solutions to problems on the web makes them findable for the next poor guy who has the same problem. I figure that if I spend a day working on fixing problems, I can spend another hour publishing them so others can benefit.

These are for Solr 3.5 running under Tomcat 6.0.24.

Unable to open velocity.log

Velocity tries to create a file velocity.log and gets a permission failure.

HTTP Status 500 - org.apache.velocity.exception.VelocityException:
Failed to initialize an instance of
org.apache.velocity.runtime.log.Log4JLogChute with the current
runtime configuration. java.lang.RuntimeException:
org.apache.velocity.exception.VelocityException: Failed to initialize
an instance of org.apache.velocity.runtime.log.Log4JLogChute with
the current runtime configuration. at
Caused by: java.io.FileNotFoundException: velocity.log
(Permission denied) at java.io.FileOutputStream.openAppend(Native
Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:207)

But where is it trying to create the file? What directory? Since
no pathname was specified, it seemed that the file would be created
in the current working directory of Tomcat. What would that be?

First I had to figure out what process ID Tomcat was running under:

frisbee:~ $ ps aux | grep tomcat
tomcat     498  0.6  1.3 6240056 214880 ?      Sl   09:27   0:10 /usr/lib/jvm/java/bin/java ....

In this case, it’s PID 498. So we go to the /proc/498 directory and see what’s in there.

frisbee:~ $ cd /proc/498
frisbee:/proc/498 $ ls -al
ls: cannot read symbolic link cwd: Permission denied
ls: cannot read symbolic link root: Permission denied
ls: cannot read symbolic link exe: Permission denied
total 0
dr-xr-xr-x   7 tomcat tomcat 0 May 22 09:27 ./
dr-xr-xr-x 173 root   root   0 May 17 11:33 ../
dr-xr-xr-x   2 tomcat tomcat 0 May 22 09:58 attr/
-rw-r--r--   1 tomcat tomcat 0 May 22 09:58 autogroup
-r--------   1 tomcat tomcat 0 May 22 09:58 auxv
-r--r--r--   1 tomcat tomcat 0 May 22 09:58 cgroup
--w-------   1 tomcat tomcat 0 May 22 09:58 clear_refs
-r--r--r--   1 tomcat tomcat 0 May 22 09:56 cmdline
-rw-r--r--   1 tomcat tomcat 0 May 22 09:58 coredump_filter
-r--r--r--   1 tomcat tomcat 0 May 22 09:58 cpuset
lrwxrwxrwx   1 tomcat tomcat 0 May 22 09:58 cwd

We can see that cwd is a symlink to a directory, but we have to be root to see what the target directory is. I have to run ls again as root.

frisbee:/proc/498 $ sudo ls -al
[sudo] password for alester:
total 0
dr-xr-xr-x   7 tomcat tomcat 0 May 22 09:27 .
dr-xr-xr-x 174 root   root   0 May 17 11:33 ..
dr-xr-xr-x   2 tomcat tomcat 0 May 22 09:58 attr
-rw-r--r--   1 tomcat tomcat 0 May 22 09:58 autogroup
-r--------   1 tomcat tomcat 0 May 22 09:58 auxv
-r--r--r--   1 tomcat tomcat 0 May 22 09:58 cgroup
--w-------   1 tomcat tomcat 0 May 22 09:58 clear_refs
-r--r--r--   1 tomcat tomcat 0 May 22 09:56 cmdline
-rw-r--r--   1 tomcat tomcat 0 May 22 09:58 coredump_filter
-r--r--r--   1 tomcat tomcat 0 May 22 09:58 cpuset
lrwxrwxrwx   1 tomcat tomcat 0 May 22 09:58 cwd -> /usr/share/tomcat6

I could also have used the stat command.

frisbee:/proc/498 $ sudo stat cwd
File: `cwd' -> `/usr/share/tomcat6'
Size: 0               Blocks: 0          IO Block: 1024   symbolic link
Device: 3h/3d   Inode: 100017      Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (   91/  tomcat)   Gid: (   91/  tomcat)
Access: 2012-05-22 09:58:17.131009458 -0500
Modify: 2012-05-22 09:58:17.130009715 -0500
Change: 2012-05-22 09:58:17.130009715 -0500

So we find that the CWD is /usr/share/tomcat6. I don’t want the tomcat user to have rights to that directory, so instead I create a velocity.log file in a proper log directory and then symlink
to it.

frisbee:/proc/498 $ cd /var/log/tomcat6
frisbee:/var/log/tomcat6 $ sudo touch velocity.log
frisbee:/var/log/tomcat6 $ sudo chown tomcat:tomcat velocity.log
frisbee:/var/log/tomcat6 $ cd /usr/share/tomcat6
frisbee:/usr/share/tomcat6 $ sudo ln -s /var/log/tomcat6/velocity.log velocity.log

Now the app is able to open /usr/share/tomcat6/velocity.log without error.
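
The whole /proc dance can be scripted, too. A Ruby sketch of my own (Linux-specific, and it needs the same root-or-owner permissions as the ls above):

```ruby
# Resolve a process's current working directory via /proc (Linux only).
# You must be root, or the process's owner, to read the cwd symlink.
def cwd_of( pid )
  File.readlink( "/proc/#{pid}/cwd" )
end

cwd_of( Process.pid )  # this process's own cwd, same as Dir.pwd
# cwd_of( 498 )        # would give "/usr/share/tomcat6" on the box above
```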

log4j error

Once I created a log file Velocity could write to, it started throwing an error with log4j, the Java logging package.

org.apache.log4j.Logger.setAdditivity(Z)V java.lang.NoSuchMethodError:
org.apache.log4j.Logger.setAdditivity(Z)V at
org.apache.velocity.runtime.log.Log4JLogChute.initAppender(Log4JLogChute.java:126) at
org.apache.velocity.runtime.log.Log4JLogChute.init(Log4JLogChute.java:85) at
org.apache.velocity.runtime.log.LogManager.createLogChute(LogManager.java:157) at
org.apache.velocity.runtime.log.LogManager.updateLog(LogManager.java:255) at
org.apache.velocity.runtime.RuntimeInstance.initializeLog(RuntimeInstance.java:795) at
org.apache.velocity.runtime.RuntimeInstance.init(RuntimeInstance.java:250) at
org.apache.velocity.app.VelocityEngine.init(VelocityEngine.java:107) at
org.apache.solr.response.VelocityResponseWriter.getEngine(VelocityResponseWriter.java:132) at
org.apache.solr.response.VelocityResponseWriter.write(VelocityResponseWriter.java:40) at
org.apache.solr.core.SolrCore$LazyQueryResponseWriterWrapper.write(SolrCore.java:1774) at
org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:352) at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:273) at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555) at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857) at
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:679)

In searching the web for this error, I found this ticket in the Solr bug tracker that says that the log4j .jar files should be removed from the Solr tarball, because they can conflict with existing .jars on the system. That conflict was exactly the error I was getting.

I wanted to remove the extra .jar files, so I used locate to search my system for any log4j .jars. Indeed, there was one installed with solr:

frisbee:~ $ locate log4j
/var/lib/tomcat6/webapps/solr/WEB-INF/lib/log4j-over-slf4j-1.6.1.jar

So I just changed the extension of the file so it wouldn’t get loaded as a .jar.

frisbee:~ $ sudo mv /var/lib/tomcat6/webapps/solr/WEB-INF/lib/log4j-over-slf4j-1.6.1.{jar,jarx}

Now Velocity loads beautifully. Now the real work starts: Configuration of Velocity to understand the schema in my Solr core.

I hope this helps someone in the future!

Rethink the post-interview thank you note

May 15, 2012 Interviews 3 comments

Good golly do people get riled up by the idea of sending a thank you note after a job interview. “Why should I thank them, they didn’t give me a gift!” is a common refrain in /r/jobs.  “They should be thanking me!”

I think the big problem is the name, “thank you note.”  It makes us recall being forced to say nice things about the horrible sweater Aunt Margaret gave us for Christmas.

It’s not a thank you note. It’s a followup. It doesn’t have to be any more than this:

Dear Mr. Manager,

Thank you for the opportunity to meet with you today. I enjoyed the interview and tour and discussing your database administration needs. Based on our discussions with Peter Programmer, I’m sure that my PostgreSQL database administration skills would be a valuable addition to the Yoyodyne team. I look forward to hearing from you.

Susan Candidate.

There’s nothing odious there. You’re not fawning or begging. You’re thanking the interviewer for his time, reminding him of key parts of the interview and your key skills, and reasserting that you are interested in the job. (And before you say “Of course I’m interested, I went to the interview!”, know that perceived indifference and/or lack of enthusiasm is an interview killer.)

People ask “Do I really have to do that?” and I say “No, you don’t HAVE to, you GET to.” It’s not a chore, it’s an opportunity.

Please help me with terminology for “small acts that add to a greater whole”

May 11, 2012 Open source 14 comments

I’m looking for a term to describe small positive actions that individuals do to add up to a greater whole.

Examples in the world of open source software might include:

  • Answering a question on a mailing list
  • Testing a beta release
  • Welcoming someone to a community
  • Submitting a bug report, or clarifying an existing one
  • Patching a bug
  • Closing a ticket
  • Removing dead code
  • Silencing a compiler warning
  • Adding a test to the test suite
  • Blogging about how you use a software package
  • Thanking others on the project
  • Patching the documentation
  • Adding a tutorial example to the docs
  • Adding notes to the README
  • Hosting or speaking at a user group meeting
  • Attending a user group meeting

Outside of software development specifically, the best example is making an edit to a Wikipedia page. Wikipedia is nothing but millions of these small actions, aggregated.

The term “microaggression” was coined to describe a small non-physical interaction between people that communicates hostility towards others.  I’m looking for the opposite.

The Japanese term “kaizen” means “improvement”, or “change for the better”, and is close to what I’m talking about, but I’m looking for a term for the actions, not the process.

If there’s not a similar term to describe the small positive actions that create a greater whole, I’m going to coin it.

Ideas? References? Existing terms I haven’t thought of?  Please post them below.

Before you write a patch, write an email

April 27, 2012 Open source 7 comments

I often get surprise patches in my projects from people I’ve never heard from.  I’m not talking about things like fixing typos, or fixing a bug in the bug tracker.  I’m talking about new features, handed over fully-formed. Unfortunately, it’s sometimes the case that the patch doesn’t fit the project, or where the project is going.  I feel bad turning down these changes, but it’s what I have to do.

Sometimes it feels like they’re trying to do their best to make the patch a surprise, sort of like working hard to buy your mom an awesome birthday present without her knowing about it. But in the case of contributing to a project, surprise isn’t a good thing. Talking to the project first doesn’t take away from the value of what you’re trying to do. This talking up front may even turn up a better way to do what you want.

There’s nothing wrong with collaborating with others to plan work to be done. In our day-to-day jobs, when management, clients and users push us to start construction of a project before requirements are complete, it’s called WISCY, or Why Isn’t Someone Coding Yet? As programmers, it’s our job to push back against this tendency to avoid wasted work. Sometimes this means pushing back against users, and sometimes it means pushing back against ourselves.

I’m not suggesting that would-be contributors go through some sort of annoying process, filling out online forms to justify their wants.  I’m just talking about a simple email. I know that we want to get to the fun part of coding, but it makes sense to spend a few minutes to drop a quick note: “Hey, I love project Foo, and I was thinking about adding a switch to do X.”  You’re sure to get back a “Sounds great! Love to have it!” or a “No, thanks, we’ve thought about that and decided not to do that”.  Maybe you’ll find that what you’re suggesting is already done and ready for the next release. Or maybe you’ll get no reply to your email at all, which tells you your work will probably be ignored anyway.

I’m not suggesting that you shouldn’t modify code for your own purposes.  That’s part of the beauty of using open source. If you need to add a feature for yourself, go ahead. But if your goal is to contribute to the project as well as scratching your own itch, it only makes sense to start with communication.

Communication starts with understanding how the project works. The docs probably include something about the development process the project uses. While you’re at it, join the project’s mailing list and read the last few dozen messages in the archive.  I can’t tell you how many times I’ve answered a question or patch from someone when I’ve said the same thing to someone else a week earlier.

Next time you have an idea to contribute a change to an open source project, let someone know what you’re thinking first. Find out if your patch is something the project wants. Find out what the preferred process for submitting changes is. Save yourself from wasted time.

We want your collaboration! We want your help! Just talk to us first.

What if news stories were written like resumes?

April 20, 2012 Job hunting, Resumes No comments

If news stories were written like the resumes I see every day, a news story about a fire might look like this:

“There was a fire on Tuesday in a building. Traffic was backed up some distance for some period of time. Costs of the damage were estimated. There may have been fatalities and injuries, or maybe not.”

Now look at your resume. Does it have bullet items like “Wrote web apps in Ruby”? That’s nearly as uninformative as my hypothetical news story above. But your resume’s job is to get you an interview by providing compelling details about your work history.

Add details! What sort of web apps? What did they do? Did they drive company revenue? How many users used them? How big were these apps?

Or maybe you have a bullet point of “provided help desk support.” How many users did you support? How many incidents per day/week? What sorts of problems? Were they geographically close, or remote? What OSes did you support? What apps? Was there some sort of service level agreement you had to hit?

If you don’t provide these details, the reader is left to make her own assumptions. “Help desk support” might mean something as simple as handling two phone calls a day for basic “I can’t get the Google to work” questions. Without the details you provide, that’s the picture the reader is free to infer.

When you write about your work experiences, you have a picture in your head of the history and skills you’re talking about. To you, “wrote web apps in Ruby” or “provided help desk support” brings back the memory of what that entailed. The reader doesn’t have access to your memory. That’s why you have a resume with written words. You have to spell it out, to draw that picture for her. Your details make that happen and increase the chances you’ll get an interview.

Programmers, please take five minutes to provide some data for an experiment

April 19, 2012 Programming, Unix 30 comments

Whenever people talk about ack, there’s always a discussion of whether ack is faster than grep, and how much faster, and people provide data points that show “I searched this tree with find+grep in 8.3 seconds, and it took ack 11.5 seconds”. Thing is, that doesn’t take into account the amount of time it takes to type the command.

How much faster is it to type an ack command line vs. a find+xargs line?

Inspired by this tweet by @climagic, I decided to time myself. I used time read to see how long it would take me to type three different command lines.

The three command lines are:

    A: ack --perl foo
    B: find . -name '*.php' | xargs grep foo
    C: find . -name '*.pl' -o -name '*.pm' | xargs grep foo

So I tried it out using time read. Note that read doesn’t actually execute the command; it just measures how long it takes to type the line and hit Enter.

    $ time read
    find . -name '*.pl' -o -name '*.pm' | xargs grep foo

    real    0m8.648s
    user    0m0.000s
    sys     0m0.000s

For me, the timings averaged about 1.4s for A, 6.1s for B and 8.6s for C, and that was with practice. I also found it nearly impossible to type the punctuation-heavy B and C lines without making typos and having to correct them.
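If you want to average several trials without doing the stopwatch arithmetic yourself, a small shell function can wrap the same time read trick. This is just a sketch, not from the original post: the function name, default trial count, and the use of bc and GNU date’s %N fractional seconds are my own choices.

```shell
# average_typing_time N: time how long it takes to type a line and hit
# Enter, repeated N times, then print the average.
# Sketch only: assumes bash, bc, and GNU date (for the %N nanoseconds).
average_typing_time() {
  local trials=${1:-3} total=0 start end line
  for _ in $(seq "$trials"); do
    start=$(date +%s.%N)              # wall-clock start, fractional seconds
    IFS= read -r line || return 1     # type the candidate command, hit Enter
    end=$(date +%s.%N)
    total=$(echo "$total + ($end - $start)" | bc)
  done
  echo "average: $(echo "scale=2; $total / $trials" | bc)s"
}
```

Run average_typing_time 5, type the candidate command line five times (Enter after each), and it prints the average.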

So I ask of you, dear readers, would you please try this little experiment yourself, and then post your results in the comments? Just give me numbers for A, B and C and then also include the name of your favorite Beatle so I know you actually read this. Also, if you have any insights as to why you think your results came out the way they did, please let me know.

At this point I’m just collecting data. It’s imperfect, but I’m OK with that.

  • Yes, I’m sure there’s another way I could do this timing. It might even be “better”, for some values of “better”.
  • Yes, I know that I’m asking people to report their own data and there may be observational bias.
  • Yes, I know I’m excluding Windows users from my sample.
  • Yes, I know it’s possible to create shell aliases for long command lines.
  • Yes, I know that the find command lines should be using find -print0 and xargs -0.
  • Yes, I know that some shells have globbing like **/*.{pl,pm}.
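On the -print0/-0 caveat above, a null-safe version of command line C would look like the sketch below, run here against a throwaway directory of my own invention. Note the \( ... \) grouping: without it, -print0 applies only to the '*.pm' branch of the -o.

```shell
# Null-safe version of command line C, demonstrated on a scratch tree.
tmp=$(mktemp -d)
printf 'foo\n' > "$tmp/a.pl"
printf 'foo\n' > "$tmp/b c.pm"    # the space in this name breaks plain xargs
# -print0 emits NUL-terminated names; xargs -0 reads them safely.
find "$tmp" \( -name '*.pl' -o -name '*.pm' \) -print0 | xargs -0 grep foo
```

Both files match, including the one with a space in its name that an unquoted find | xargs pipeline would mangle.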

Note: I’ve heard from a zsh user that the shell’s built-in time doesn’t work for this, but /usr/bin/time does.

Thanks for your help! I’ll report on results in a future post.