Archive for the ‘cli’ Category

I was curious to know how many books, on average, I read a month. I don’t expose this information directly on bookpiles, either; you could extract it from the RSS feed. It’s one of the features I would like to add once I understand what information I want to present and how it is best presented.

In the meantime, I ran a query in the database and came up with this:

2009-01 3
2009-02 2
2009-03 4
2009-04 3
2009-05 3
2009-06 3
2009-07 2
2009-08 3
2009-09 3
2009-10 3
2009-11 4
2009-12 1
2010-01 0
2010-02 1
2010-03 1
2010-04 6
2010-05 2
2010-06 5
2010-07 7
2010-08 2
2010-09 7
2010-10 4

I felt 80% done. Then I realized I didn’t quite know how I would extract, from the command line, the sum, mean, standard deviation, minimum, and maximum. Of course, I could run it through R. Or Excel… The question wasn’t how to do statistics in general; it was how to do it as a filter … easily … right now.

A little research didn’t turn up any obvious answer. (Please correct me if I missed an obvious solution.)

I wrote my own in awk. (awk is present on ALL the machines I use.)

min == "" {min=max=$1}
$1 < min  {min = $1}
$1 > max  {max = $1}
          {sum+=$1; sumsq+=$1*$1}
END {
  print "lines: ", NR;
  print "min:   ", min;
  print "max:   ", max;
  print "sum:   ", sum;
  print "mean:  ", sum/NR;
  print "stddev:", sqrt(sumsq/NR - (sum/NR)**2)
}

Here’s what the output looks like:

I included it in my dotfiles: the awk code and a bootstrap shell script (used above).

Read Full Post »

Managing PATH and MANPATH


My PATH variable used to be a mess. I have used UNIX-like systems for 10 years and have carried around my configuration files in one form or another since then.

Think about Solaris, think about /opt (or /sw), and change the order based on different requirements.

I have seen a lot of people devise clever if-then-else logic with OS detection. I have seen yet others, myself included, who aimed for the most comprehensive and all-inclusive PATH.

In the end, all that matters is that when you type a command, it IS in your PATH.


As for MANPATH, the situation was even worse. I used to depend on the OS-inherited MANPATH and hope it contained everything I needed. For a long time, I didn’t bother to set it right and just googled for the man pages if/when I was in need.

Invoking man for something I just installed often meant getting no help at all.

Where to look?

When it comes to bin and share/man directories, there are a handful of predictable places to look. For PATH:

  • /usr/X11/bin
  • /bin
  • /sbin
  • /usr/bin
  • /usr/sbin
  • /usr/local/bin
  • /opt/local/bin
  • /opt/local/sbin

Notice the bin and sbin combinations. And for MANPATH:

  • /usr/X11/share/man
  • /usr/share/man
  • /usr/local/share/man
  • /opt/local/share/man

It should be clear that there is a lot of duplication there. Also, if you change the order of your PATH, you should probably change the order of your MANPATH so that the command you get the man page for is the command invoked by your shell. The GNU man pages are not very useful when you are using the BSD commands, on Darwin, for example.

A solution

Here’s the plan:

  1. Clear both PATH and MANPATH.
  2. Given a path, detect the presence of bin, sbin, and share/man subdirectories.
  3. Prepend the existing directories from step 2 to both PATH and MANPATH (as appropriate).

What you get:

  • Only existing paths go in PATH and MANPATH. No more just-in-case™ and for-some-other-OS™ paths polluting your variables.
  • Order of the paths is the same for both PATH and MANPATH. If you change the order in one, the order is changed for the other.
  • Easier to read configuration files. Colon-separated lists are no fun to parse visually.

Here’s something you can put in your .bashrc:

# prepend_colon(val, var)
prepend_colon() {
  if [ -z "$2" ]; then
    echo "$1"
  else
    echo "$1:$2"
  fi
}

# unshift_path(path)
unshift_path() {
  if [ -d "$1/sbin" ]; then
    export PATH=$(prepend_colon "$1/sbin" "$PATH")
  fi
  if [ -d "$1/bin" ]; then
    export PATH=$(prepend_colon "$1/bin" "$PATH")
  fi

  if [ -d "$1/share/man" ]; then
    export MANPATH=$(prepend_colon "$1/share/man" "$MANPATH")
  fi
}

export PATH=""
export MANPATH=""

unshift_path "/usr/X11"
unshift_path ""
unshift_path "/usr"
unshift_path "/usr/local"
unshift_path "/opt/local"
unshift_path "$HOME/local"
unshift_path "$HOME/etc"

export PATH=$(prepend_colon ".local" $PATH)


I use $HOME/local to store machine-specific binaries/scripts. For example, that’s where I install homebrew on Mac OS X. That’s also where I would put cron scripts or other “I just use this script on this machine” type of things.

I use $HOME/etc to store binaries I carry around with my configuration files. That’s where I clone my dotfiles project.

Finally, the relative path .local is an interesting hack. It allows for directory-specific binaries. This solves the “I just use this script when I’m in that directory” problem. This trick is discussed in this blog post.

Read Full Post »

I have a bunch of shell/ruby scripts in a directory that I include into my PATH variable. The scripts live there because I don’t have a better location to put them. That’s fine when scripts are general enough to be used anywhere.

Some scripts are not so general and are meant to interact only with a few specific files and directories.

Putting things in perspective: this is a discussion about global variables and local variables, as applied to Bash. You want your variables to be scoped to only what is needed and no more.

Here’s an insight:

Include a .local directory in your PATH.

For example:

forgetful, a project I maintain that implements the Supermemo algorithm, takes CSV files as input. I could use a spreadsheet to manipulate these files, but I prefer to use Vim and a bunch of unixy scripts to do what I want. In the directory where I keep my CSV files, I created a subdirectory called .local (it could be called anything). When I’m in that directory, Bash includes the .local subdirectory when looking for executables … in essence, I get my “local” executables.
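A minimal demonstration of the idea, using made-up names and a temp directory:

```shell
# Made-up example: a directory of CSV files with its own .local scripts.
dir=$(mktemp -d)
mkdir -p "$dir/.local"

# A script that only makes sense in this directory (name is made up).
cat > "$dir/.local/shuffle-csv" <<'EOF'
#!/bin/sh
sort -R "$@"
EOF
chmod +x "$dir/.local/shuffle-csv"

# With the relative path .local on PATH, entering the directory
# makes shuffle-csv resolvable; elsewhere, it isn't.
cd "$dir"
PATH=".local:$PATH" command -v shuffle-csv
```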

Notice how there’s a Rakefile in that directory? I think that’s a workaround that a lot of people end up using. I’ll probably strip out most of what the Rakefile is doing and shove it to the .local directory.

Read Full Post »

Using vim as a pager

I’ve talked casually about using Vim as a pager before. However, I’m still surprised to see how many people use Vim regularly and don’t know about this feature.

Here’s a quote straight from vim --help

vim [arguments] -               read text from stdin

Admittedly, it’s easy to overlook the hyphen in the explanation.

(screenshot: vim --help, with the hyphen argument highlighted)

Why Vim as a Pager?

If you’re using Vim already, there’s nothing else to install.

If you’re using Vim already, it’s already configured the way you like it.

More importantly, Vim detects the kind of file being piped to it and turns on the appropriate syntax highlighting. Why page in black and white? In this case, “less” is definitely less!

Improving the experience

As a pager, you want to use Vim in read-only mode.

some command | vim -R -

What’s the difference? Vim doesn’t ask you to save the file when you quit. Of course, you can still modify and write the file … the -R flag is just a more reasonable pager default.

PAGER variable and ANSI Escape Sequences

You probably don’t want to point your PAGER variable at Vim, though. Vim doesn’t understand ANSI escape sequences, so a command like “man vim | vim -R -” won’t show colors; it will show the escape sequences themselves.

(screenshot: raw ANSI escape sequences displayed in Vim)

I haven’t found any quick and simple solution to make Vim show ANSI escape sequences, but it’s pretty easy to strip them out before passing the file to Vim:

man vim | col -b | vim -R -

I use less as PAGER. I use vim in explicit cases.
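A tiny wrapper keeps the stripping step handy; the name vless is my own invention, not a standard command:

```shell
# vless: page stdin through vim, first stripping the backspace
# overstrikes that man uses for bold/underline. Name is made up.
vless() {
  col -b | vim -R -
}

# Usage: man vim | vless
```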


The view command gets installed at the same time as vim. It’s just a symlink to vim. Using view is exactly like typing vim -R.

There’s a certain aesthetic in:

some command | view -

But I find that typing vim -R - is easier on my fingers’ muscle memory.

Read Full Post »

My very customized bash prompt

Bash’s $PS1 variable is what you see every time you get a prompt. It’s there, waiting for you to type something.

Some people go minimal … maybe just “$” like the Bourne shell. Others go crazy and cram as much information as possible … on multiple lines … in colors.

My $PS1 falls somewhere in the middle, but I realized today it was time for a change. One of the things I wanted was the current path. A lot of people like to put that in the “xterm title bar”, but I wanted it closer to the action.

After a bit of experimentation, I found that I could play with the PROMPT_COMMAND to faux-position the path in pale gray ON the same line as the prompt.

Here’s my variables.sh file which I put under ~/etc/bash/local/ — it works with the rest of my config files:

function prompt_command() {
  printf "\e[30;40;1m%*s\n\e[0m\e[1A" "$COLUMNS" "$PWD"
}

export PROMPT_COMMAND=prompt_command

export LANG="en_US.UTF-8"
export LC_ALL="en_US.UTF-8"
export TERM="xterm-256color"
export FIND_OPTIONS="-name .git -prune -o -name .hg -prune -o"

The prompt_command function uses “printf” to print $PWD, right-aligned to $COLUMNS characters. Escape sequences color it gray. Finally, the “\e[1A” sequence moves the cursor one line up, so the prompt itself prints on the same line as the PROMPT_COMMAND output.
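The width trick on its own, with made-up values; printf reads the field width from the first argument:

```shell
# %*s takes its width from an argument: right-align "hi" in 10 columns.
printf '[%*s]\n' 10 hi
# prints: [        hi]
```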

It looks something like:


What do you put in your $PS1?

Read Full Post »

Rotating GNU Screens

I often have many open screen windows to monitor a machine: “top”, “tail -f …”, “thin …” and so on.

Wouldn’t it be great if you could open a terminal and have these screens rotated automatically?

I wrote this simple shell function (to include in your .bashrc):

function screen_rotate() {
  local session_name=${1:?"missing session name"}
  local sleep_duration=${2:-5}

  while true; do
    screen -S "$session_name" -X next
    sleep "$sleep_duration"
  done
}

The first argument is the screen session name. The second, optional, argument is the time (in seconds, 5 by default) spent on each screen window.

Read Full Post »

A year ago, I posted Like Slime, for Vim. There was a lot of interest in sticking with Vim but having a way to get something similar to Slime.

I tried to explain what the plugin was all about … but I always felt it would be better served by a screencast. Here’s what I came up with:

(Watch it bigger)

In short:

  • you can control GNU screen from the command-line
  • vim can, therefore, control GNU screen
  • you type in vim, type C-c C-c, and it appears in your screen session
  • just run clojure (bash, ruby, scala, …) in screen

Read the original post for more details. To save you time, here’s the Vim plugin: slime.vim.

Read Full Post »

Older Posts »