ack 2.0 has been released

April 19, 2013 Open source, Programming, Unix 6 comments

ack 2.0 has been released. ack is a grep-like search tool that has been optimized for searching large heterogeneous trees of source code.

ack has been around since 2005. Since then it has become very popular and is packaged by all the major Linux distributions. It is cross-platform and pure Perl, so it runs easily on Windows. See the “Why ack?” page for the top ten reasons, and dozens of testimonials.

ack 2.0 has many changes from 1.x, but here are four big differences and features that long-time ack 1.x users should be aware of.

  • By default all text files are searched, not just files with types that ack recognizes. If you prefer the old ack 1.x behavior of only searching files that ack recognizes, you can use the -k/--known-types option.
  • There is a much more flexible type identification system available. You can specify a file type based on extension (.rb is a Ruby file), filename (Rakefile is a Ruby file), a regex match on the first line (a file whose first line matches /#!.+ruby/ is a Ruby file), or a regex match on the filename itself.
  • Greater support for ackrc files. You can have a system-wide ackrc at /etc/ackrc, a user-specific ackrc in ~/.ackrc, and ackrc files local to your projects.
  • The -x argument tells ack to read the list of files to search from stdin, much like xargs. This lets you do things like git ls-files | ack -x foo, and ack will search every file in the git repository, and only those files that appear in the repository. (See the sketch just below.)
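
A quick sketch of those options in action (the tt type is purely illustrative):

# Search only the file types ack recognizes, as in ack 1.x
ack -k pattern

# Define a new "tt" type by extension, then search only those files
ack --type-set=tt:ext:tt2 --tt pattern

# Search every file tracked in the git repository, and nothing else
git ls-files | ack -x pattern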

On the horizon is a framework that will let authors write ack plugins in Perl for even more flexibility. You might create a plugin that allows searching through zip files, reading text from an Excel spreadsheet, or fetching and searching a web page.

ack has always thrived on numerous contributions from the ack community, but I especially want to single out Rob Hoelz for his work over the past year or two. If it were not for Rob, ack 2.0 might never have seen the light of day, and for that I am grateful.

A final note: In the past, ack’s home page was betterthangrep.com. With the release of ack 2.0, I’ve changed to beyondgrep.com. “Beyond” feels less adversarial than “better than”, and implies moving forward as well as upward. beyondgrep.com also includes a page of other tools that go beyond the capabilities of grep when searching source code.

For long-time ack users, I hope you enjoy ack 2.0 and that it makes your programming life easier and more enjoyable. If you’ve never used ack, give it a try.

How to prepare for a job interview: The 4-point summary

March 7, 2013 Interviews, Job hunting 2 comments

The core of your preparation for the job interview:

  1. Learn what they do.
  2. Learn how they do what they do.
  3. Figure out exactly what skills, experience and background you have that will help them do what they do faster and cheaper.
  4. Plan how you’re going to explain #3 to them.

Everything else is implementation details.

You should have the first three figured out before you even send a resume. If you don’t have what it takes to help them do it cheaper and faster, then don’t waste your time applying for the job.

Tell me about weird, frustrating or bad job interview questions you’ve been asked.

January 17, 2013 Interviews 4 comments

I’m working on an article for SmartBear on handling bad or weird job interview questions, and I’d like to get input from you. Have you been asked weird, insulting, inexplicable or just plain bad questions in a job interview? Please let me know about them, and where you were interviewing, or at least what type of company and job was involved if you don’t want to name names. I want to include real examples in my article, along with suggestions on how to answer them effectively. I’ll also discuss alternative questions that get at what (I suspect) the interviewer is really trying to learn. I’m looking for first-hand accounts rather than questions you might have heard a friend talking about.

I’m sure many of you have had estimation questions like “How many light bulbs are there in the city of Chicago?” I don’t see those as weird if you’re interviewing at Google, where estimation and scaling are core competencies, but they may well be weird in other contexts. Have you been asked these sorts of questions elsewhere? I get the sense from reading things online that they’re often asked by managers who think they’re cool questions, without any business need for asking them.

Please let me know in the comments, or email me at andy@petdance.com.

Solr’s DataImportHandler can’t handle line-based SQL comments

September 13, 2012 Open source, Programming No comments

At least twice now I’ve run into this problem where I try to comment my SQL code, but doing so makes my Solr data importer blow up.  I post it here for posterity.

Part of your DIH configuration will be at least one entity, probably with SQL code like this:

<entity name="nodes" dataSource="jdbc"
    query="
        SELECT
            foo,
            bar
        FROM blah_blah
    ">

And maybe part of the SQL query isn’t obvious, so you want to add a comment like

<entity name="nodes" dataSource="jdbc"
    query="
        SELECT
            foo, -- We need the foo so we can fribble the wibbitz
            bar
        FROM blah_blah
    ">

But that blows up because the DIH strips linefeeds from your SQL code before passing it to the server.  This means that the SQL code you’re passing looks like this:

SELECT foo, -- We need the foo so we can fribble the wibbitz bar FROM blah_blah

Your line-based comment has wiped out the rest of your SQL query. So you have to use C-style comments instead:

<entity name="nodes" dataSource="jdbc"
    query="
        SELECT
            foo, /* We need the foo so we can fribble the wibbitz */
            bar
        FROM blah_blah
    ">

Chances are your database supports C-style comments, according to this post on StackOverflow:

C style comments are standard in SQL 2003 and SQL 2008 (but not in SQL 1999 or before). The following DBMS all support C style comments:

  • Informix
  • PostgreSQL
  • MySQL
  • Oracle
  • DB2
  • Sybase
  • Ingres
  • Microsoft SQL Server
  • SQLite (3.7.2 and later)

That is not every possible DBMS, but it is more or less every major SQL DBMS.

Slides from today’s resumes & interviews talk

August 22, 2012 Job hunting 1 comment

This morning I gave a presentation titled Resumes & Interviews From the Hiring Manager’s Perspective at the Career TOOLS Conference in Milwaukee, WI.

Big lesson learned: Even when the conference says they’re providing the laptops already set up, bring your own slide clicker, in case you’re on a big stage in an auditorium, and the laptop is in the orchestra pit, and they don’t have a clicker for you.

That technical problem aside, solved by having a human slide clicker and hand signals, it was a good conference and I hope some people got some ideas to help them in their job searches.

A little Ruby program to monitor Solr DIH imports

July 11, 2012 Open source, Programming No comments

Solr is a text indexing package. All interaction with it happens over HTTP: you GET or POST requests to the service, and it sends back XML responses.
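
Kicking off an import is itself just a GET (a sketch; the host and core name here match the monitoring script below, and full-import is a standard DataImportHandler command):

curl 'http://hostname:8080/solr/db/dih?command=full-import'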

After you do the GET to start an import with Solr’s DataImportHandler, you have to check a status URL, and Solr gives a response like this:

<response>
    <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">0</int>
    </lst>
    <lst name="initArgs">
        <lst name="defaults">
            <str name="config">jdbc.xml</str>
        </lst>
    </lst>
    <str name="command">status</str>
    <str name="status">busy</str>
    <str name="importResponse">A command is still running...</str>
    <lst name="statusMessages">
        <str name="Time Elapsed">0:0:4.545</str>
        <str name="Total Requests made to DataSource">1</str>
        <str name="Total Rows Fetched">36262</str>
        <str name="Total Documents Processed">36261</str>
        <str name="Total Documents Skipped">0</str>
        <str name="Full Dump Started">2012-07-11 09:31:03</str>
    </lst>
    <str name="WARNING">This response format is experimental.  It is likely to change in the future.</str>
</response>

And then after a while when you check the status URL, the response looks like this:

<response>
    <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">0</int>
    </lst>
    <lst name="initArgs">
        <lst name="defaults">
            <str name="config">jdbc.xml</str>
        </lst>
    </lst>
    <str name="command">status</str>
    <str name="status">idle</str>
    <str name="importResponse"/>
    <lst name="statusMessages">
        <str name="Total Requests made to DataSource">1</str>
        <str name="Total Rows Fetched">1000000</str>
        <str name="Total Documents Skipped">0</str>
        <str name="Full Dump Started">2012-07-11 09:23:30</str>
        <str name="">Indexing completed. Added/Updated: 1000000 documents. Deleted 0 documents.</str>
        <str name="Committed">2012-07-11 09:26:01</str>
        <str name="Total Documents Processed">1000000</str>
        <str name="Time taken">0:2:31.95</str>
    </lst>
    <str name="WARNING">This response format is experimental.  It is likely to change in the future.</str>
</response>

But when does it finish? There’s no way to tell other than hitting that status URL and watching for it to change. I needed a tool to tell me when importing had finished, so I could use it in my makefile. It just has to check the status until it’s completed, and then exit.

So, I wrote a little program to do the monitoring, using Ruby and the Nokogiri library. Nokogiri is an XML/HTML parser with built-in XPath and CSS selector capabilities (it fills roughly the role of Perl’s XML::LibXML), and the open-uri library handles the HTTP fetching.

#!/usr/bin/ruby

require 'rubygems'
require 'nokogiri'
require 'open-uri'

while true
    doc = Nokogiri::XML( open( 'http://hostname:8080/solr/db/dih?command=status' ) )

    # If an import is still running, this element will say "A command is still running..."
    # It turns blank when the import has finished.
    status = doc.xpath( '//response/str[@name="importResponse"]' ).inner_text
    if ( status == '' )
        break
    end

    # Get the import's elapsed time and document count and display them
    time_elapsed   = doc.xpath( '//response/lst[@name = "statusMessages"]/str[@name = "Time Elapsed"]' ).inner_text
    docs_processed = doc.xpath( '//response/lst[@name = "statusMessages"]/str[@name = "Total Documents Processed"]' ).inner_text
    puts docs_processed + ' documents in ' + time_elapsed

    sleep(2)
end

I’m not much of a Ruby guy, but this was pretty simple to write. Most of my time went to reading Nokogiri’s method listings and reacquainting myself with XPath syntax. The one Ruby gotcha I found: before Ruby 1.9, if your program uses any gems, you have to require 'rubygems' before you require the gems themselves.
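
For the makefile use mentioned above, the script slots in as a blocking step right after the import is kicked off, something like this (a sketch; dih-monitor.rb is a hypothetical name for the script above):

# start the import, then block until the DIH goes idle
curl -s 'http://hostname:8080/solr/db/dih?command=full-import'
ruby dih-monitor.rb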

SELECT * is a bug waiting to happen

July 10, 2012 Programming No comments

A SQL SELECT statement that uses * instead of an explicit column list is a bug waiting to happen. Beyond the quick-and-dirty prototyping stage, every SQL query in an application should explicitly specify the columns it needs, to protect against future changes.

Say you’ve got a table and code like this:

users table:
id integer NOT NULL
name varchar(100) NOT NULL
mail varchar(100)

my $query = perform_select( 'select * from users' );
while ( my $row = $query->fetch_next ) {
    if ( defined($row->{mail}) ) {
        # do something to send user mail
    }
}

Later on, someone goes and renames the users.mail column to users.email. Your program will never know: $row->{mail} is now always undef, so the mail-sending branch just never executes.
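
Had the query named its columns, the rename would have failed loudly at the first run instead (a sketch; the exact error text varies by DBMS):

-- After users.mail becomes users.email, this query dies immediately
-- with an undefined-column error (PostgreSQL, for example, reports
-- ERROR: column "mail" does not exist) instead of silently
-- returning rows without the column.
SELECT id, name, mail FROM users;

A hard error at query time is a much better failure mode than a branch that silently never runs.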

Here’s another example. Say you’ve got that users table joining to departments, like so

users table:
id integer NOT NULL
name varchar(100) NOT NULL
email varchar(100)
deptid integer

dept table:
id integer NOT NULL
deptname varchar(100) NOT NULL

SELECT *
FROM users u JOIN dept d ON (u.deptid = d.id)

So your selects come back with id, name, email, deptid, id, deptname. You’ve got “id” in there twice. How does your DB layer handle that situation? Which “id” column takes precedence? That’s not something I want to have to spend brain cycles thinking about.
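
If you really do need both IDs, explicit aliases remove the guesswork entirely:

SELECT u.id AS user_id, d.id AS dept_id, u.name, d.deptname
FROM users u JOIN dept d ON (u.deptid = d.id)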

You should even specify which table each column comes from. For example, say you don’t want the IDs, so you specify just the columns you want. You write something like this:

SELECT name, email, deptname
FROM users u JOIN dept d ON (u.deptid = d.id)

Later on, someone adds an email column to the dept table. Now, your “SELECT name, email, deptname” is making an ambiguous column reference to “email”. If you specify everything fully:

SELECT u.name, u.email, d.deptname
FROM users u JOIN dept d ON (u.deptid = d.id)

then you’re future-proof.

Of course, this rule doesn’t apply to code that is dealing with columns in aggregate. If you’re writing a utility that deals with all columns in a row and transforms them somehow as a group, then no, you don’t need to specify columns.

Aside from the potential bugs, I also think it’s important to be clear to the human reader of your code what exactly you’re pulling from the database. SELECT * makes it a guessing game. Which of these makes it more obvious to the reader what I’m doing?

SELECT * FROM users;

or

SELECT first_name, last_name, email_addr FROM users;

There are also all sorts of speed reasons to specify columns. You reduce the amount of work fetching data from the disk, and your DBMS may not have to fetch rows from disk at all if the data is covered by an index. For discussion of the performance issues, see this StackOverflow thread. One thing to remember: your code will never be slower if you specify columns. It can ONLY be faster.
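
As a sketch of the covering-index point (the index itself is hypothetical, and index-only behavior varies by DBMS):

-- Given an index on exactly the columns the query needs...
CREATE INDEX users_name_email ON users (first_name, last_name, email_addr);

-- ...the DBMS can answer this query from the index alone, never
-- touching the table rows. SELECT * can never be served that way.
SELECT first_name, last_name, email_addr FROM users;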

The speedups are secondary, however. I want to write my queries to be resistant to future change. I don’t mind making a few extra keystrokes to make that happen. That’s why I always specify columns in my SELECTs.

Self-selecting for the thick-skinned means turning away contributors.

May 29, 2012 Open source 3 comments

Every so often, usually in the middle of an online argument or flame war, someone will say that the climate of the group has made him or her uncomfortable. He’ll say something like “I don’t want to be around all this hostility” or, worst of all, “This makes me not want to get involved.” The reply sometimes comes back: “You’re just thin-skinned.”

Labeling someone as “thin-skinned” makes no sense. There is no measure of skin thickness. When someone says “You are thin-skinned,” he’s really saying “You are less willing to put up with anti-social behavior than I am.”

I wonder what the speaker hopes “You’re just thin-skinned” will accomplish. Is it supposed to inspire the listener? Make him realize the error of his ways? I don’t know what the intent is, but what it communicates is “You are wrong to feel that way,” and that’s hurtful, not helpful. There’s nothing wrong with not wanting to put up with anti-social behavior.

None of this is an endorsement of being easily offended, however you may define “easily.” I wish we all had the attitude of Gina Trapani, who once said “I eat your sexist comments for breakfast. YUM.” But not everyone does, and that’s no reason to shut them out. Yes, online communities can get hostile, but that doesn’t mean we need to tacitly endorse that hostility. We can do better, and we should, to help our communities grow and thrive.

Even setting aside compassion for other humans, it makes no practical sense to dismiss or insult those you see as thin-skinned. Ricardo Signes recalled a lightning talk at OSCON 2011 where someone noted, “When we say that this community requires a thick skin, it means we’re self-selecting for only people with thick skin.”

Self-selecting for the thick-skinned means turning away contributors. If you were running a restaurant, and a customer said “I like the food here, but my waiter was rude to me,” the sensible restaurateur would take it as an opportunity for improvement. You’d thank the patron for bringing it to your attention. You wouldn’t say “Well, that’s just the way it is here” or “You’re just too sensitive.”

There’s an adage in business that for every customer complaint you get, there are between ten and 100 other dissatisfied customers who don’t say anything and go somewhere else. This is especially so in the case of those tarred as “thin-skinned” by someone in the community. For every person who speaks up and says “I don’t like this hostility”, how many more unsubscribe from the list, leave the IRC channel or vow not to come back to the user group meeting again, all without saying a word about it?

In online communities, we’re not dealing with an owner-customer relationship, but nonetheless contributors to the community are a scarce commodity. A business owner can’t afford to turn away customers. Is your online community or open source project so flush with talent that you can turn away contributors?

My Solr+Tomcat troubles, and how I fixed them

May 22, 2012 Open source, Programming 8 comments

I’ve been working at getting Solr running under Tomcat, and spent most of a day fixing these problems. The fixes themselves didn’t take much time; most of it went to trying to grok the Java app ecosystem.

My Solr install worked well. I was able to import records and search them through the interface. Where I ran into trouble was with the Velocity search browser that comes with Solr.

I’m documenting my troubles and their solutions here because otherwise they won’t exist on the web for the next poor soul who has the same problems to find. I figure that if I spend a day fixing problems, I can spend another hour publishing the solutions so others can benefit.

These are for Solr 3.5 running under Tomcat 6.0.24.

Unable to open velocity.log

Velocity tries to create a file velocity.log and gets a permission failure.

HTTP Status 500 - org.apache.velocity.exception.VelocityException:
Failed to initialize an instance of
org.apache.velocity.runtime.log.Log4JLogChute with the current
runtime configuration. java.lang.RuntimeException:
org.apache.velocity.exception.VelocityException: Failed to initialize
an instance of org.apache.velocity.runtime.log.Log4JLogChute with
the current runtime configuration. at
...
Caused by: java.io.FileNotFoundException: velocity.log
(Permission denied) at java.io.FileOutputStream.openAppend(Native
Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:207)
...

But where is it trying to create the file? What directory? Since no pathname was specified, it seemed that the file would be created in the current working directory of Tomcat. What would that be?

First I had to figure out what process Tomcat was running as:

frisbee:~ $ ps aux | grep tomcat
tomcat     498  0.6  1.3 6240056 214880 ?      Sl   09:27   0:10 /usr/lib/jvm/java/bin/java ....

In this case, it’s PID 498. So we go to the /proc/498 directory and see what’s in there.

frisbee:~ $ cd /proc/498
frisbee:/proc/498 $ ls -al
ls: cannot read symbolic link cwd: Permission denied
ls: cannot read symbolic link root: Permission denied
ls: cannot read symbolic link exe: Permission denied
total 0
dr-xr-xr-x   7 tomcat tomcat 0 May 22 09:27 ./
dr-xr-xr-x 173 root   root   0 May 17 11:33 ../
dr-xr-xr-x   2 tomcat tomcat 0 May 22 09:58 attr/
-rw-r--r--   1 tomcat tomcat 0 May 22 09:58 autogroup
-r--------   1 tomcat tomcat 0 May 22 09:58 auxv
-r--r--r--   1 tomcat tomcat 0 May 22 09:58 cgroup
--w-------   1 tomcat tomcat 0 May 22 09:58 clear_refs
-r--r--r--   1 tomcat tomcat 0 May 22 09:56 cmdline
-rw-r--r--   1 tomcat tomcat 0 May 22 09:58 coredump_filter
-r--r--r--   1 tomcat tomcat 0 May 22 09:58 cpuset
lrwxrwxrwx   1 tomcat tomcat 0 May 22 09:58 cwd
...

We can see that cwd is a symlink, but we have to be root to see its target, so I run ls again as root.

frisbee:/proc/498 $ sudo ls -al
[sudo] password for alester:
total 0
dr-xr-xr-x   7 tomcat tomcat 0 May 22 09:27 .
dr-xr-xr-x 174 root   root   0 May 17 11:33 ..
dr-xr-xr-x   2 tomcat tomcat 0 May 22 09:58 attr
-rw-r--r--   1 tomcat tomcat 0 May 22 09:58 autogroup
-r--------   1 tomcat tomcat 0 May 22 09:58 auxv
-r--r--r--   1 tomcat tomcat 0 May 22 09:58 cgroup
--w-------   1 tomcat tomcat 0 May 22 09:58 clear_refs
-r--r--r--   1 tomcat tomcat 0 May 22 09:56 cmdline
-rw-r--r--   1 tomcat tomcat 0 May 22 09:58 coredump_filter
-r--r--r--   1 tomcat tomcat 0 May 22 09:58 cpuset
lrwxrwxrwx   1 tomcat tomcat 0 May 22 09:58 cwd -> /usr/share/tomcat6

I could also have used the stat command.

frisbee:/proc/498 $ sudo stat cwd
File: `cwd' -> `/usr/share/tomcat6'
Size: 0               Blocks: 0          IO Block: 1024   symbolic link
Device: 3h/3d   Inode: 100017      Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (   91/  tomcat)   Gid: (   91/  tomcat)
Access: 2012-05-22 09:58:17.131009458 -0500
Modify: 2012-05-22 09:58:17.130009715 -0500
Change: 2012-05-22 09:58:17.130009715 -0500

So we find that the CWD is /usr/share/tomcat6. I don’t want the tomcat user to have rights to that directory, so instead I create a velocity.log file in a proper log directory and then symlink to it.

frisbee:/proc/498 $ cd /var/log/tomcat6
frisbee:/var/log/tomcat6 $ sudo touch velocity.log
frisbee:/var/log/tomcat6 $ sudo chown tomcat:tomcat velocity.log
frisbee:/var/log/tomcat6 $ cd /usr/share/tomcat6
frisbee:/usr/share/tomcat6 $ sudo ln -s /var/log/tomcat6/velocity.log velocity.log

Now the app is able to open /usr/share/tomcat6/velocity.log without error.

log4j error

Once I created a log file Velocity could write to, it started throwing an error involving log4j, the Java logging package.

org.apache.log4j.Logger.setAdditivity(Z)V java.lang.NoSuchMethodError:
org.apache.log4j.Logger.setAdditivity(Z)V at
org.apache.velocity.runtime.log.Log4JLogChute.initAppender(Log4JLogChute.java:126) at
org.apache.velocity.runtime.log.Log4JLogChute.init(Log4JLogChute.java:85) at
org.apache.velocity.runtime.log.LogManager.createLogChute(LogManager.java:157) at
org.apache.velocity.runtime.log.LogManager.updateLog(LogManager.java:255) at
org.apache.velocity.runtime.RuntimeInstance.initializeLog(RuntimeInstance.java:795) at
org.apache.velocity.runtime.RuntimeInstance.init(RuntimeInstance.java:250) at
org.apache.velocity.app.VelocityEngine.init(VelocityEngine.java:107) at
org.apache.solr.response.VelocityResponseWriter.getEngine(VelocityResponseWriter.java:132) at
org.apache.solr.response.VelocityResponseWriter.write(VelocityResponseWriter.java:40) at
org.apache.solr.core.SolrCore$LazyQueryResponseWriterWrapper.write(SolrCore.java:1774) at
org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:352) at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:273) at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555) at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857) at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:679)

In searching the web for this error, I found this ticket in the Solr bug tracker that says that the log4j .jar files should be removed from the Solr tarball, because they can conflict with existing .jars on the system. That conflict was exactly the error I was getting.

I wanted to remove the extra .jar files, so I used locate to search my system for any log4j .jars. Indeed, there was one installed with Solr:

frisbee:~ $ locate log4j
...
/var/lib/tomcat6/webapps/solr/WEB-INF/lib/log4j-over-slf4j-1.6.1.jar
...

So I just changed the extension of the file so it wouldn’t get loaded as a .jar.

frisbee:~ $ sudo mv /var/lib/tomcat6/webapps/solr/WEB-INF/lib/log4j-over-slf4j-1.6.1.{jar,jarx}

Now Velocity loads beautifully, and the real work starts: configuring Velocity to understand the schema in my Solr core.

I hope this helps someone in the future!

Rethink the post-interview thank you note

May 15, 2012 Interviews 3 comments

Good golly do people get riled up by the idea of sending a thank you note after a job interview. “Why should I thank them, they didn’t give me a gift!” is a common refrain in /r/jobs.  “They should be thanking me!”

I think the big problem is the name, “thank you note.”  It makes us recall being forced to say nice things about the horrible sweater Aunt Margaret gave us for Christmas.

It’s not a thank you note. It’s a followup. It doesn’t have to be any more than this:

Dear Mr. Manager,

Thank you for the opportunity to meet with you today. I enjoyed the interview and tour and discussing your database administration needs. Based on our discussions with Peter Programmer, I’m sure that my PostgreSQL database administration skills would be a valuable addition to the Yoyodyne team. I look forward to hearing from you.

Sincerely,
Susan Candidate.

There’s nothing odious there. You’re not fawning or begging. You’re thanking the interviewer for his time, reminding him of the high points of the interview and of your key skills, and reasserting that you are interested in the job. (And before you say “Of course I’m interested, I went to the interview!”, know that perceived indifference or lack of enthusiasm is an interview killer.)

People ask “Do I really have to do that?” and I say “No, you don’t HAVE to, you GET to.” It’s not a chore, it’s an opportunity.