A few years ago, when I was more into Python, I stumbled on the Python Challenge. It was great fun: I learned a bunch of stuff, and it forced me to play with libraries I wasn’t familiar with.

In their own words: (about)

Python Challenge is a game in which each level can be solved by a bit of (Python) programming.

The Python Challenge was written by Nadav Samet.

All levels can be solved by straightforward and very short scripts.

Python Challenge welcomes programmers of all languages. You will be able to solve most riddles in any programming language, but some of them will require Python.

Sometimes you’ll need extra modules. All can be downloaded for free from the internet.

It is just for fun – nothing waits for you at the end.

Keep the scripts you write – they might become useful.

Learning

People who know me know that I read a lot. I am, therefore, painfully aware that books are not the best way to learn things.

What is the best way to learn something?

To be honest … I don’t know. Not books, not screencasts… There are better ways: a one-on-one session, pair programming. On the side of DOING there is always: DOING more. Read more code, program more, release more.

Doing

In the spirit of the Python Challenge, today I released Command-line One-liner Challenges.

The idea is not exactly the same: I do provide solutions and you are allowed to move on based on your interests (or frustration).

I strive to make each challenge look and feel like any other challenge. The directory structure will be something like this:
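(A sketch; the numbered directory name is my assumption, the file names are the ones described below.)

01/
  instructions.txt
  problem/
    compare.sh
    input.txt
    expected.txt
  solution/
    compare.sh
    input.txt
    expected.txt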
Each challenge is its own (numbered) directory. Inside it, you will find a very short instructions.txt file and two subdirectories: problem and solution. Those subdirectories are identical except for the content of the compare.sh file.

Look at the input.txt file. Then, look at the expected.txt file. Imagine how, as a one-liner, you could transform input.txt into expected.txt.

You are supposed to run compare.sh. Just open it and fill in the blanks, so to speak.

#!/bin/sh

convert() {
  # fill in the blanks: replace this cat with your one-liner
  cat "$@"
}

convert input.txt > actual.txt

${DIFF:-diff -q} actual.txt expected.txt
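For example, if a hypothetical challenge asked you to sort the lines of input.txt, you would fill in convert like this:

convert() {
  sort "$@"
}

If compare.sh then prints nothing, your one-liner produced exactly expected.txt.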

The project’s README contains more information.

Comments

This is meant to be fun. Clone the repository and give it a go. I’m going to push more challenges over time, so you might want to watch the repo.

If you have a better solution than what I provide, please send it to me and I’ll find a way to include it in the project.

Also, if you have an idea for a challenge, let me know.

How MANPATH works

Just after I was done writing Managing PATH and MANPATH, I stumbled on “man man” and put to rest the mysteries of MANPATH.

How it works

If MANPATH is defined, it will be used to look up man pages.

If MANPATH is NOT defined, the manpath config file is used instead. Depending on the OS you are using, it might be something like /etc/man.conf (Mac OS X) or /etc/manpath.config (Ubuntu).
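A quick way to see which case applies on your machine:

> echo "$MANPATH"            # if non-empty, man uses it directly
> cat /etc/man.conf          # otherwise: the config file (Mac OS X)
> cat /etc/manpath.config    # … or this one (Ubuntu)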

The “man man” page (Mac OS X) also mentions a command-line flag to override MANPATH (-M), but I don’t think that’s excessively useful.

Here’s something useful: when you aren’t sure what man page you’re going to get, try the -w flag.
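For example, to see which file man would use for man itself:

> man -w man

It prints the path of the page it would display instead of opening it, so you can tell at a glance which version of the documentation you are getting.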

PATH

My PATH variable used to be a mess. I have used UNIX-like systems for 10 years and have carried around my configuration files in one form or another since then.

Think about Solaris, think about /opt (or /sw), and change the order based on different requirements.

I have seen a lot of people devise clever if-then-else logic with OS detection. I have seen yet others, myself included, who aimed for the most comprehensive and all-inclusive PATH.

In the end, all that matters is that when you type a command, it IS in your PATH.

MANPATH

As for MANPATH, the situation was even worse. I used to depend on (and hope) that the OS-inherited MANPATH contained everything I needed. For a long time, I didn’t bother to set it right and just googled for the man pages if/when I was in need.

Invoking man for something I just installed often meant getting no help at all.

Where to look?

When it comes to bin and share/man directories, there are a handful of predictable places to look. For PATH:

  • /usr/X11/bin
  • /bin
  • /sbin
  • /usr/bin
  • /usr/sbin
  • /usr/local/bin
  • /opt/local/bin
  • /opt/local/sbin

Notice the bin and sbin combinations. And for MANPATH:

  • /usr/X11/share/man
  • /usr/share/man
  • /usr/local/share/man
  • /opt/local/share/man

It should be clear that there is a lot of duplication there. Also, if you change the order of your PATH, you should probably change the order of your MANPATH so that the command you get the man page for is the command invoked by your shell. The GNU man pages are not very useful when you are using the BSD commands on Darwin, for example.
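A quick way to spot a mismatch, using the -w flag from the previous post:

> type sed      # which binary your shell will run
> man -w sed    # which man page you will read

If the two paths come from different prefixes (say, /opt/local vs /usr), you are probably reading the wrong documentation.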

A solution

Here’s the plan:

  1. Clear both PATH and MANPATH.
  2. Given a path, detect the presence of bin, sbin, and share/man subdirectories.
  3. Prepend the existing directories from step 2 to both PATH and MANPATH (as appropriate).

What you get:

  • Only existing paths go in PATH and MANPATH. No more just-in-case™ and for-some-other-OS™ paths polluting your variables.
  • The order of the paths is the same for both PATH and MANPATH. If you change the order in one, it changes in the other.
  • Easier-to-read configuration files. Colon-separated lists are no fun to parse visually.

Here’s something you can put in your .bashrc:


# prepend_colon(val, var): prepend val to the colon-separated list in var
prepend_colon() {
  if [ -z "$2" ]; then
    echo "$1"
  else
    echo "$1:$2"
  fi
}

# unshift_path(path): prepend path's bin, sbin and share/man
# subdirectories to PATH and MANPATH, but only if they exist
unshift_path() {
  if [ -d "$1/sbin" ]; then
    export PATH=$(prepend_colon "$1/sbin" "$PATH")
  fi
  if [ -d "$1/bin" ]; then
    export PATH=$(prepend_colon "$1/bin" "$PATH")
  fi

  if [ -d "$1/share/man" ]; then
    export MANPATH=$(prepend_colon "$1/share/man" "$MANPATH")
  fi
}

# TABULA RASA
export PATH=""
export MANPATH=""

unshift_path "/usr/X11"
unshift_path ""
unshift_path "/usr"
unshift_path "/usr/local"
unshift_path "/opt/local"
unshift_path "$HOME/local"
unshift_path "$HOME/etc"

export PATH=$(prepend_colon ".local" $PATH)
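To check the result, start a new shell and inspect both variables:

> echo "$PATH"
> echo "$MANPATH"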

Notes

I use $HOME/local to store machine-specific binaries/scripts. For example, that’s where I install homebrew on Mac OS X. That’s also where I would put cron scripts or other “I just use this script on this machine” type of things.

I use $HOME/etc to store binaries I carry around with my configuration files. That’s where I clone my dotfiles project.

Finally, the relative path .local is an interesting hack. It allows for directory-specific binaries. This solves the “I just use this script when I’m in that directory” problem. The trick is discussed in its own blog post.

Yesterday I reached into a project I had not touched in months. When I wrote that Ruby script, it was supposed to be a one-off effort but, as it usually goes with things like these, it ended up sticking around for much longer than anticipated.

I have RVM installed and I had installed many Rubies and done all kinds of gem manipulations. In short, the “environment” in which that project had worked was gone.

I had the “require” statements to guide me:


require 'rubygems'
require 'dm-core'
require 'dm-timestamps'

require 'json'

However, that’s not the whole story. In this specific case, DataMapper requires more gems based on the connection string you give it.

I think we have all tried this:

  1. try to run a script
  2. see which “require” crashed the whole thing
  3. install some gems (hopefully with the version needed)
  4. repeat (see the sketch below)
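In shell terms, the loop looks something like this (the script name is hypothetical, and the exact LoadError text varies by Ruby version):

> ruby the_script.rb
# ... no such file to load -- dm-core (LoadError)
> gem install dm-core
> ruby the_script.rb
# ... and so on, until it finally runs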

Isn’t Bundler supposed to solve that problem?

Bundler

I have used Bundler with Rails 3. But that’s all configured and just automagically works. In a standalone project, there are a few things you need to do yourself.

First:

> bundle init

All that command did was to create an empty Gemfile.

Open the Gemfile with your favorite editor and add your gem dependencies. Mine looked like this:


# A sample Gemfile
source :gemcutter

gem "dm-core"
gem "dm-timestamps"
gem "dm-sqlite-adapter"

gem "json"

Then, run:

> bundle install

So far, this is all regular Bundler stuff. What about your script?

Bundler knows about all your dependencies; surely it will “require” everything you need, right?

Yes and … no.

Bundler Documentation Fail

Here’s what Bundler’s documentation shows: you require “bundler/setup” … and then you still require each of your gems by hand.

Thank you Bundler, I “require” you and now … huh … I still require all the gems I need?! It doesn’t sound very DRY to me.

What the “bundler/setup” line does is configure the load path.

And you could do your requires manually…

If I’m writing this, it’s because there’s a way. I’m just surprised that the Bundler website doesn’t seem to document this useful feature. If there are good reasons why it’s not documented (tradeoffs or something), or why it isn’t the default behavior, we can only guess.

Here’s what your script should do:


require 'rubygems'
require 'bundler/setup'

Bundler.require

The “Bundler.require” line will require all your dependencies.

One last note: do lock (bundle lock) your Gemfile so that the dependency resolution phase is skipped. It will make loading your script much faster. (This also applies to Rails projects.)
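That is, once bundle install has succeeded:

> bundle lock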

Reading a book takes time.

The time you spend reading a book is not spent doing something else.

Opportunity cost: (source)

Benefit, profit, or value of something that must be given up to acquire or achieve something else. Since every resource (land, money, time, etc.) can be put to alternative uses, every action, choice, or decision has an associated opportunity cost.

Does it sound obvious? Lately, however, I’ve made a few such mistakes with respect to some books I bought.

Atlas Shrugged

Atlas Shrugged has been recommended to me many times, by many different people. Consequently, it rose above my threshold of consciousness and I decided to buy it, read it, and reach my own conclusions about it.

I was at Chapters one day and remembered Atlas Shrugged. Of course, it was on the shelves, and I looked through it. What sealed the deal was the price: $10! How could I go wrong?!

Where did I go wrong?

It could have been an audio book.

I’m writing this with the book on my lap. It stands at 1069 pages in something that feels like 6-pt font. More so than with other books I’ve had, I will feel the impact of the time invested in reading it. Even in audio, Atlas Shrugged stands tall at 63 hours of narration.

I bought the physical book with good intentions. However, it has been gathering dust for a while now. I wondered when, if ever, I would have enough time to sit down and read it.

If only I had bought it in audio format. But it was too late now… I had already bought it in paper format. Buying the audio book meant paying “twice”. There was something very unpleasant about that thought.

Opportunity Cost

Then, a few days ago, I realized that the $15 it would cost me to buy the audio book was not completely lost.

It meant I could do other things while listening to the book: dishes, chores, exercising. It meant I could start the book right away, instead of waiting for some perfect moment in the future, because I could do other things at the same time.

Am I happy that I paid twice? No. But the total cost of the book, $25 ($10 paper + $15 audio), must be contrasted against the time I just saved by not having to sit down while reading.

I have a few other books with which I will have to repeat this process. Uncle Tom’s Cabin comes to mind. That was another “cheap” book I bought on the spur of the moment. I’m learning this lesson about the total cost of a book.


How to Nap

Key Insight:

Nap for exactly 20 minutes.

A Common Mistake

I’ve never had trouble sleeping. My problem falls in the opposite category: by modern society’s standards, I sleep too much. I need at least 7–8 hours of sleep a night or I become a zombie; I can function, but higher-level skills (concentration, insight, creativity…) are reduced.

If I lie down and try to nap, I’m usually gone for 2–3 hours. That’s a sizable part of a day! It’s neither desirable nor always (if ever) possible to disappear like that.

The fix seemed simple! I have a kitchen timer, so I set it for 1 hour. That’s reasonable, right?

When the alarm rings, I “wake” up. But all my body wants is to go back to sleep. At this point, if I get up, I feel terrible and it takes hours until I feel fine. If, instead, I go back to sleep, I’m gone for another hour or so.

I’ve also tried sleeping for 30 or 90 minutes without much success.

A Solution

I realized I was doing something wrong.

So, I googled “how to nap”.

Beyond the platitudes like “go to a quiet place, lie down, and close your eyes”, there were a few articles that talked about durations.

From about.com:

Sleep comes in five stages. If your nap takes you from stage 1 sleep (just drifting off) to stage 2 (brain activity slows), you will wake up feeling energized and more alert. If your nap takes you into stages 3 and 4 (deep sleep), you will not wake easily and will feel groggy and tired. Sleep stage 1 typically lasts about 10 minutes and stage 2 lasts another 10 minutes. That makes the 20-minute nap ideal for most people (your time will vary to some degree, experiment to learn what works best).

I’d heard about the 20-minute nap before but had always dismissed it: “How can someone feel rested after only 20 minutes?!”

Also:

  • If I lie down and think for a few minutes, should I reset the timer to another 20 minutes?
  • Or, rephrased: is it 20 minutes from the moment I fall asleep? How am I supposed to time that?!

Implicit in the quote above is the idea that lying down and “thinking time” are an integral part of the nap. As long as you’re trying to fall asleep and not actively entertaining those thoughts: good job, you’re doing it right! In that way, it’s exactly like meditation.

My Experience

I was very skeptical of the 20-minute nap. But I’ve been trying this technique for the last 2 weeks and it worked every single time!

I set the timer and lie down. Usually, it takes me 4–5 minutes to wind down. And when the alarm goes off it feels like I just fell asleep. It feels like “huh… was I sleeping?”

Another common experience: I think I’m awake and wasting my precious 20 minutes, but when the alarm goes off it doesn’t feel like 20 minutes have passed … I just “time-warped”.

I have a bunch of shell/Ruby scripts in a directory that I include in my PATH variable. The scripts live there because I don’t have a better location for them. That’s fine when scripts are general enough to be used anywhere.

Some scripts are not so general and are meant to interact only with a few specific files and directories.

Putting things in perspective: this is the Bash equivalent of the discussion about global variables and local variables. You want your variables scoped to only what is needed and no more.

Here’s an insight:

Include a .local directory in your PATH.

For example:

forgetful, a project I maintain that implements the Supermemo algorithm, takes CSV files as input. I could use a spreadsheet to manipulate these files, but I prefer to use Vim and a bunch of unixy scripts to do what I want. In the directory where I keep my CSV files, I created a subdirectory called .local (it could be called anything). When I’m in that directory, Bash includes the .local subdirectory when looking for executables … in essence, I get my “local” executables.
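A minimal sketch of that setup, with a hypothetical dedupe helper (the .local PATH entry comes from the PATH post above):

> cd ~/flashcards                # hypothetical directory holding the CSV files
> mkdir .local
> printf '#!/bin/sh\nsort -u "$1"\n' > .local/dedupe
> chmod +x .local/dedupe
> dedupe words.csv               # resolved via the .local entry in PATH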

There’s also a Rakefile in that directory. I think that’s a workaround that a lot of people end up using. I’ll probably strip out most of what the Rakefile is doing and shove it into the .local directory.