posted in DevOps on 2013-10-21 18:21:09 UTC by Dave Martorana
I wanted a more robust way to track the amount of time building certain parts of our servers takes. As I describe in another post (and want to dramatically expand on in a future post), we use a series of logically separated shell scripts to build our servers from vanilla instances. So here’s a bit of a tip about logging elapsed time.
Here’s a basic shell script:
#!/bin/sh
START=`date +%s`
# Do stuff here
END=`date +%s`
ELAPSED=$(( $END - $START ))
Seems smart enough - and then I can take the ELAPSED variable and log it or whatever. But this method only works in whole seconds, and that’s silly. So this is a bit more elaborate, but much more exact:
#!/bin/sh
START=`date +%s%N`
# Do stuff here
END=`date +%s%N`
ELAPSED=`echo "scale=8; ($END - $START) / 1000000000" | bc`
This is the difference between “2 seconds” and “2.255322 seconds”, which is important to us, since some shell scripts may only take a fraction of a second.
So what’s going on?
First, we change the date format to %s%N: the %N gets us nanoseconds instead of seconds, as whole numbers. (Note: %N doesn’t work on BSD variants.) So we divide by 1 billion and pipe the calculation string through bc, a unix calculator, which returns our seconds difference to 8 decimal places.
This changes my output from:
Backup complete in 2 seconds
to:
Backup complete in 2.83732828 seconds
Much better :)