
Archive for the ‘development’ Category

Bundler Without Rails

Yesterday I reached into a project I had not touched in months. When I wrote that Ruby script, it was supposed to be a one-off effort but, as it usually goes with things like these, it ended up sticking around much longer than anticipated.

I have RVM installed, and in the meantime I had installed many Rubies and done all kinds of gem manipulations. In short, the “environment” in which that project had worked was gone.

I had the “require” statements to guide me:


require 'rubygems'
require 'dm-core'
require 'dm-timestamps'

require 'json'

However, that’s not the whole story. In this specific case, DataMapper requires
more gems based on the connection string you give it.
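
To make this concrete, here’s a sketch (the database path is hypothetical). The adapter gem only gets loaded when DataMapper parses the connection string, so nothing in the require statements above hints at it:

require 'dm-core'

# The sqlite3:// scheme makes DataMapper load dm-sqlite-adapter at
# setup time; a postgres:// string would pull in dm-postgres-adapter
# instead. Neither appears in any explicit require.
DataMapper.setup(:default, "sqlite3:///path/to/project.db")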

I think we have all tried this:

  1. try to run a script
  2. see which “require” crashed the whole thing
  3. install some gems (hopefully with the version needed)
  4. repeat

Isn’t Bundler supposed to solve that problem?

Bundler

I have used Bundler with Rails 3, but there everything is configured and just automagically works. In a standalone project, there are a few things you need to do yourself.

First:

> bundle init

All that command did was create an empty Gemfile.

Open the Gemfile with your favorite editor and add your gem dependencies. Mine looked like this:


# A sample Gemfile
source :gemcutter

gem "dm-core"
gem "dm-timestamps"
gem "dm-sqlite-adapter"

gem "json"

Then, run:

> bundle install

So far, this is all regular Bundler stuff. What about your script?

Bundler knows about all my dependencies; surely it will “require” everything I need, right?

Yes and … no.

Bundler Documentation Fail

Here’s a screenshot from Bundler’s documentation:

Thank you Bundler, I “require” you and now … huh … I still require all the gems I need?! It doesn’t sound very DRY to me.

What the “bundler/setup” line did was to configure the load path.

And you could do your requires manually…

If I’m writing this, it’s because there’s a way. I’m just surprised that the Bundler website doesn’t seem to document this useful feature. If there are good reasons why it isn’t documented, or why it isn’t the default behavior (tradeoffs or something), we can only guess at them.

Here’s what your script should do:


require 'rubygems'
require 'bundler/setup'

Bundler.require

The “Bundler.require” line will require all your dependencies.

One last note: do lock your Gemfile (bundle lock) so that the dependency resolution phase is skipped. It will make loading your script much faster. (This also applies to Rails projects.)
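
In the same style as the commands above, that should be as simple as:

> bundle lock

which records the resolved dependencies in a Gemfile.lock.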


I have a bunch of shell/ruby scripts in a directory that I include in my PATH variable. The scripts live there because I don’t have a better location for them. That’s fine when scripts are general enough to be used anywhere.

Some scripts are not so general and are meant to interact only with a few specific files and directories.

Putting things in perspective: this is a discussion about global variables and local variables, applied to Bash. You want your variables to be scoped to only what is needed, and no more.

Here’s an insight:

Include a .local directory in your PATH.

For example:

forgetful, a project I maintain that implements the Supermemo algorithm, takes CSV files as input. I could use a spreadsheet to manipulate these files, but I prefer to use Vim and a bunch of unixy scripts to do what I want. In the directory where I keep my CSV files, I created a subdirectory called .local (it could be called anything). When I’m in that directory, Bash will include the .local subdirectory when looking for executables … in essence, I get my “local” executables.
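
For this to work, the PATH entry has to be relative. A minimal sketch, in ~/.bashrc or equivalent:

# A relative PATH entry: Bash resolves it against the current working
# directory, so ./.local is searched from whatever directory you are in.
# Appending rather than prepending means local scripts cannot shadow
# system commands.
export PATH="$PATH:.local"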

Notice how there’s a Rakefile in that directory? I think that’s a workaround a lot of people end up using. I’ll probably strip out most of what the Rakefile is doing and shove it into the .local directory.


UPDATE: With respect to terminology, check out Drew Neil’s comment below.

In Vim, I’ve been using splits for years. Splits are great:

  • view 2 files at the same time
  • view 2 parts of the same file at the same time
  • dump bits of text into a new split
  • dump command outputs into a new split
  • and so on…

However, I’ve been using the subset of splits that I understood while shying away from advanced use cases. Somewhere down my TODO list, there was an item called “understand Vim splits”. This blog post is an attempt to document what I discovered.

3 Questions

When it comes to splitting, there are, thankfully, only 3 questions:

  • are you splitting the buffer or the window?
  • are you splitting horizontally or vertically?
  • do you want to send the split left, right, up or down?


When you type:

:split

You are using the defaults: buffer, horizontal, up.

There are 8 combinations:

window  horizontal  up      -->   :topleft    split
window  horizontal  down    -->   :botright   split
window  vertical    left    -->   :topleft    vsplit
window  vertical    right   -->   :botright   vsplit
buffer  horizontal  up      -->   :leftabove  split
buffer  horizontal  down    -->   :rightbelow split
buffer  vertical    left    -->   :leftabove  vsplit
buffer  vertical    right   -->   :rightbelow vsplit
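
Reading the defaults off that table: with Vim’s default ‘splitbelow’ and ‘splitright’ settings, the bare commands are just shorthand for the buffer-relative forms. A quick sanity check:

:split    " same as :leftabove split  (buffer, horizontal, up)
:vsplit   " same as :leftabove vsplit (buffer, vertical, left)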

What were they thinking?! Good time to give up? :-D

Illustrated

Look at the following picture. Starting from an initial state, follow what happens when you invoke these commands.


  • for this example, it doesn’t matter whether you’re using split/vsplit or new/vnew
  • the blue buffer is where your cursor is
  • the buffers are numbered to help locate them before and after

Even though I spent a few hours thinking about splits and studying the commands to come up with that summary graph, I can’t say it’s the most intuitive set of commands around. If I stop everything I’m doing, I can mentally come up with the right command, but it’s very taxing.

Here’s a list of mappings I just added to my .vimrc:


" window
nmap <leader>sw<left>  :topleft  vnew<CR>
nmap <leader>sw<right> :botright vnew<CR>
nmap <leader>sw<up>    :topleft  new<CR>
nmap <leader>sw<down>  :botright new<CR>

" buffer
nmap <leader>s<left>   :leftabove  vnew<CR>
nmap <leader>s<right>  :rightbelow vnew<CR>
nmap <leader>s<up>     :leftabove  new<CR>
nmap <leader>s<down>   :rightbelow new<CR>

Feel free to replace the arrow keys (up, down, left, right) with k, j, h, l if you’re more comfortable with those bindings.


I was over at vimcasts and I stumbled upon the episode on Tidying whitespace.

They came up with a function that removes trailing whitespace. Unlike my “homemade” solution, it goes the extra mile by keeping your search history clean and putting your cursor back where it was before you invoked the command.

function! <SID>StripTrailingWhitespaces()
  " Preparation: save last search, and cursor position.
  let _s=@/
  let l = line(".")
  let c = col(".")
  " Do the business:
  %s/\s\+$//e
  " Clean up: restore previous search history, and cursor position
  let @/=_s
  call cursor(l, c)
endfunction

There’s room for improvement, however. Although the Single Responsibility Principle is usually invoked for classes, arguably it also applies to functions. That function doesn’t do one thing, it does two (useful) things: saving and restoring the “state”, and executing a command to remove the trailing whitespace.

Here’s a function that preserves the state:

function! Preserve(command)
  " Preparation: save last search, and cursor position.
  let _s=@/
  let l = line(".")
  let c = col(".")
  " Do the business:
  execute a:command
  " Clean up: restore previous search history, and cursor position
  let @/=_s
  call cursor(l, c)
endfunction

Notice how you can inject the command into that function. Even though Vim does have function references (to some extent), let’s just punt on this one and pass a string.

Here’s the mapping to strip out trailing whitespace:

nmap _$ :call Preserve("%s/\\s\\+$//e")<CR>

Back on Tidying whitespace, k00pa mentioned in the comments how he modified the original function to re-indent the whole file instead. But it was a copy and paste(!). With the “Preserve” function, we can turn this into a one-liner:

:call Preserve("normal gg=G")
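
And if you want the whitespace stripping to happen automatically on every save, an autocmd along these lines should work (the * pattern is just an example; scope it to the filetypes you care about):

autocmd BufWritePre * :call Preserve("%s/\\s\\+$//e")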


deps: what am I missing?

Over the years, I’ve had to set up my work environment on many computers. By “work environment”, I don’t necessarily mean work-related — just the configuration and customizations I like to get things done.

The first thing I do on a new computer is download my dotfiles and symlink everything correctly. In under a minute, I can be up and running with all the software I use configured the way I like it. Then, I usually pick a new prompt color (from 256!) to visually identify which machine I’m logged into.

I share my config files across computers by pushing/pulling to github [1].

For years, I’ve been very happy with this setup.

The Problem

The problem is what happens after. At some point, I’m going to type a command and it’s not going to be there.
screen not found
I forgot to install it… no big deal.

BUT! Now I’m probably going to emerge --sync, apt-get something, port install whatever. The point is: when I typed that command, I was getting into the flow, and now I’m not anymore. I’m probably swearing and/or doing some sysadmin work.

A Possible Solution

I had a vision:

deps screenshot

The point of deps is to answer “what am I missing?”.

That screenshot shows the output of deps on two different computers. It’s a simple script, but it automates a haphazard process I used to do by hand: I would sit there, try to come up with what I needed, get most of it right, but miss a few commands.

Of course, you and I probably have very different ideas of what a “work environment” looks and feels like. That’s ok, this script is meant to be customized.

#!/bin/bash

RED=$(tput setaf 1)
GREEN=$(tput setaf 2)

ATTR_RESET=$(tput sgr0)

# ok_failed(cmd, description)
ok_failed() {
  local cmd=$1
  local description=$2

  eval $cmd > /dev/null 2>&1

  if [ $? == 0 ]; then
    echo "[${GREEN}  OK  ${ATTR_RESET}]" $description
  else
    echo "[${RED}FAILED${ATTR_RESET}]" $description
  fi
}

# check_installed(prg_name, cmd_name=prg_name)
check_installed() {
  local prg_name=${1}
  local cmd_name=${2:-$1}

  ok_failed "which $cmd_name" $prg_name
}

check_installed vim
check_installed screen
check_installed git
check_installed awk
check_installed sed
echo
check_installed ruby
check_installed gem
check_installed rake
check_installed cap
echo
check_installed tidy
check_installed xmllint
check_installed spidermonkey js

That’s a snapshot when I wrote this post. The up-to-date version (which will change over time) can be found in my dotfiles.
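
Since ok_failed just evals an arbitrary command and reports its exit status, the checks don’t have to be about programs at all. A couple of hypothetical additions (not in the original script):

# ok_failed accepts any command that exits 0 on success,
# so it can verify dotfile symlinks as easily as programs.
ok_failed "test -L ~/.vimrc" "~/.vimrc symlink"
ok_failed "test -d ~/bin"   "~/bin directory"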

Notes

[1] It could be anything, though; I used to do this with subversion and mercurial. Also, you might not want to make your files public like I did. Do what’s right for you.


Books I read in 2009

I used to keep track of what I wanted to read on a piece of paper. What started out as a list that could fit on a Post-it rapidly grew into a few pages. At one point, I typed everything up on the computer. From a text file, through other solutions, I finally ended up writing a web application: Booklife.

At the end of 2008, I had just refactored Booklife and added events. The main purpose of events was to generate an Atom feed. However, as 2010 rolled around, I realized that all my reading habits were sitting in the database.

Here’s what I came up with:

[Book lists, January through December]

Notes

All in all, I read 34 books in 2009. At first, I was surprised by how high the number was. Then, I was surprised that I had not read even more. I guess, over time, I’ll have a better idea of how many books I’m going through in a given period of time.

Of those 34 books, 17 are audio books. That was a surprise; I would have thought it would be fewer than that. (Also: the fact that it’s exactly 50% is a coincidence.) I had argued in the past that audio books were increasing my “book throughput”. Sure, if I had not listened to these books in audio, I might have been able to squeeze in more real books. At the same time, I am not convinced: I fit audio books into contexts where real books are inconvenient, like when I’m cleaning, doing the dishes, or in transit.


For some reason, I’ve been in a dotfiles refactoring frenzy.

Though I’ve posted about my bash completion script for rake before, that was a while back and I’ve improved things since.

capistrano: on github

export COMP_WORDBREAKS=${COMP_WORDBREAKS/\:/}

_check_capfile() {
  if [ ! -e Capfile ]; then
    return
  fi

  local cache_file=".cache_cap_t"

  if [ ! -e "$cache_file" ]; then
    cap -T | awk '/^cap / {print $2}' > $cache_file
  fi

  local tasks=$(cat $cache_file)
  COMPREPLY=( $(compgen -W "${tasks}" -- $2) )
}
complete -F _check_capfile -o default cap

rake: on github

export COMP_WORDBREAKS=${COMP_WORDBREAKS/\:/}

_check_rakefile() {
  if [ ! -e Rakefile ]; then
    return
  fi

  local cache_file=".cache_rake_t"

  if [ ! -e "$cache_file" ]; then
    rake -T | awk '/^rake / {print $2}' > $cache_file
  fi

  local tasks=$(cat $cache_file)
  COMPREPLY=( $(compgen -W "${tasks}" -- $2) )
}
complete -F _check_rakefile -o default rake

There is very little difference between those two scripts. In fact, if it wasn’t bash, I would probably refactor this further (see the sketch after this list) …

  1. check that the (Cap|Rake)file exists
  2. generate the cache file if it doesn’t exist
  3. use the cache file to do the completion
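
For what it’s worth, even in bash the duplication can be factored out. Here’s a sketch (the function names are hypothetical, not from my dotfiles):

# Generic completion: $1 = command, $2 = marker file, $3 = current word.
# The COMP_WORDBREAKS tweak from above still applies.
_complete_tasks() {
  local cmd=$1 marker=$2 cur=$3
  local cache_file=".cache_${cmd}_t"

  [ -e "$marker" ] || return

  if [ ! -e "$cache_file" ]; then
    "$cmd" -T | awk -v prefix="^$cmd " '$0 ~ prefix {print $2}' > "$cache_file"
  fi

  COMPREPLY=( $(compgen -W "$(cat "$cache_file")" -- "$cur") )
}

_cap_complete()  { _complete_tasks cap  Capfile  "$2"; }
_rake_complete() { _complete_tasks rake Rakefile "$2"; }

complete -F _cap_complete  -o default cap
complete -F _rake_complete -o default rake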

“rake -T” or “cap -T” will NOT run again until you delete the cache files:

rm_caches() {
  rm -v .cache_* 2>/dev/null
}

