prevent multiple copies of script running at same time

Dev, DevOps
Single-process servers (and cron jobs) are easier to write. If a script fires up and starts processing a job that another copy of the same script is already working on, bad things happen.

The traditional solution is a bit of a hack: the script fires up and writes its unique process id (PID) to a magic file somewhere, and when the script dies it deletes the file. This works, somewhat, but fails if the script gets impolitely killed -- the "lock" file still exists but the associated script is no longer running.

The following trick is more reliable. When the script fires up it registers itself with a global list of strings held by the OS. If the code can't register itself, it knows the program is already running, and should exit. When a…
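On Linux, one common way to implement that "global list of strings" is the abstract Unix-socket namespace: the kernel releases the name automatically when the process dies, no matter how rudely it was killed. The post's own code is cut off above, so this is only an illustrative sketch (the lock name is made up):

import socket
import sys

def ensure_single_instance(name):
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        # The leading null byte puts the name in the kernel's abstract
        # namespace; it disappears automatically when the process exits.
        sock.bind('\0' + name)
    except socket.error:
        sys.exit('another copy of %s is already running' % name)
    return sock  # keep a reference so the name stays registered

lock = ensure_single_instance('my-cron-job')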

easy way to maintain source code decorum

Dev
It's best to maintain source code in a clean, pretty state. Clean code is easier to read and, more importantly, makes it easier to find the small code changes that might lead to dramatic effects.

But during development, people add debugging statements: print statements, debug logging, and notes like "# XXX fix this". Letting this code hit production is not good, as it confuses the normal operation of the system.

To pick bits of debugging fluff out of your code, add this to your top-level Makefile. Before merging your code with the main branch, run "make sniff" to sniff your code for unwantedly smelly bits, like leftover debugging code. It only checks newly-changed code, and won't warn on old smelly code:

sniff:
	git diff develop | awk '/(debug|XXX)/ { print $$1; found=1; } END { exit found }'

If…
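The awk one-liner flags any line of the diff that mentions "debug" or "XXX", including context and removed lines. To warn only on lines you are adding, roughly the same check can be written in Python -- a sketch, not from the original post:

import re
import subprocess
import sys

# Look only at added lines (leading "+", but not the "+++" file header).
diff = subprocess.check_output(['git', 'diff', 'develop']).decode('utf-8', 'replace')
smelly = [line for line in diff.splitlines()
          if line.startswith('+') and not line.startswith('+++')
          and re.search(r'debug|XXX', line)]
for line in smelly:
    print(line)
sys.exit(1 if smelly else 0)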

trivial Jenkins configuration

Dev, DevOps
Knowing when your software breaks is useful. If you catch subtle errors soon after the change, it's *much* easier to figure out which bit of code "optimization" broke things for the users.

Another benefit is fixing "works for me" syndrome. Even if you're building software just for yourself, it's nice to have a 3rd party verify that you didn't do silly things. Forgetting to check in South database changes is quite easy, but that simple omission will break everything.

Jenkins is cranky to set up, because it's a collection of moving parts. The best setup would be having Jenkins rebuild and test your code every time GitHub detects a source code change. The following setup is not as geeky, but it's much, much easier: tell Jenkins to copy the local source files and test them, every…
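Since the details are cut off above, here is only a sketch (not from the post) of the kind of build step such a job could run on a timer: mirror the local working tree into the workspace, then run the tests. The project path is a made-up example.

import os
import subprocess

# hypothetical path to the local checkout Jenkins should copy
SRC = os.path.expanduser('~/src/myproject')

# copy the working tree into Jenkins's workspace (the current directory)
subprocess.check_call(['rsync', '-a', SRC + '/', '.'])

# run the test suite; a non-zero exit code marks the build as failed
subprocess.check_call(['python', 'manage.py', 'test'])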

interesting database performance article

Dev, DevOps
Postgres doesn't support much compression directly. These guys decided to take the performance hit of compressing/decompressing data at the filesystem layer (ZFS)... and it turned out to be *faster*. For I/O-bound workloads like a database, even though you spend more CPU on compression, there are multiplicative payoffs when the data on disk is smaller.

http://citusdata.com/blog/64-zfs-compression

Profiling Django

Dev, DevOps
Install django-extensions

http://pythonhosted.org/django-extensions/runprofileserver.html

1) sudo pip install django-extensions
2) add 'django_extensions' to your app's INSTALLED_APPS list.

Run server in profiling mode

python ./manage.py runprofileserver --prof-path=/tmp 8001

Do a query

time curl -i http://localhost:8001/account/eventboard/update/

Write little reporting module

prof.py -- given later

Run report

/tmp/account.eventboard.update.017442ms.1363891644.prof :

         202963 function calls (197801 primitive calls) in 17.443 seconds

   Ordered by: internal time, call count
   List reduced from 2280 to 5 due to restriction

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    26201   14.956    0.001   14.956    0.001 /usr/lib/python2.7/ssl.py:154(read)
        2    0.725    0.363    0.725    0.363 /usr/lib/python2.7/ssl.py:301(do_handshake)
      937    0.470    0.001   15.069    0.016 /usr/lib/python2.7/socket.py:406(readline)
       10    0.318    0.032    0.318    0.032 /usr/lib/python2.7/socket.py:223(meth)
        2    0.238    0.119    1.291    0.646 /usr/local/lib/python2.7/dist-packages/httplib2/__init__.py:982(connect)

In this case,…
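The post's prof.py is "given later" and doesn't appear in this excerpt. A minimal pstats-based module that produces a report like the one above might look like this (an illustrative sketch, not the original):

import pstats
import sys

# usage: python prof.py /tmp/account.eventboard.update.017442ms.1363891644.prof
stats = pstats.Stats(sys.argv[1])
stats.sort_stats('time', 'calls')    # "Ordered by: internal time, call count"
stats.print_stats(5)                 # restrict the listing to the top 5 entries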

command-line JSLint

Dev
Finding syntax errors and so forth is not work for programmers; it's a job for our very fast but dumb slaves, the computers. Here is how to harness their power for good, by showing possible bugs in our JavaScript programs.

Install a command-line JavaScript interpreter, Rhino

sudo apt-get install rhino

Install the JavaScript checker

mkdir -p ~/src/jslint/
cd ~/src/jslint/
wget https://raw.github.com/douglascrockford/JSLint/master/jslint.js

Create a little wrapper

mkdir -p ~/bin/
echo '#!/bin/bash' >> ~/bin/jslint
echo 'rhino -f ~/src/jslint/jslint.js $*' >> ~/bin/jslint
chmod +x ~/bin/jslint

Add ~/bin/ to your PATH

Test the checker

$ echo 'alert(ding)' >>testme.js
$ jslint !$
jslint testme.js
js: uncaught JavaScript runtime exception: ReferenceError: "ding" is not defined.

It works! In our sample code, we call "alert(ding)" with an undefined variable. The code should be "alert('ding')" -- a quoted string.

notes: Simple Chef with Vagrant

DevOps
(placeholder for Chef & Vagrant article)

Install Vagrant and Chef on Ubuntu

$ sudo apt-get install vagrant chef
$ chef-client -v
Chef: 10.12.0

Install Knife

$ knife configure
$ edit ~/.chef/knife.rb
cookbook_path [ './site-cookbooks' ]

Use Knife to create a cookbook

$ knife cookbook create beer
** Creating cookbook beer
** Creating README for cookbook: beer
** Creating CHANGELOG for cookbook: beer
** Creating metadata for cookbook: beer

When the "beer" cookbook runs, it creates a file in /tmp

$ edit site-cookbooks/beer/recipes/default.rb
File.open('/tmp/beer.txt', 'w') {|f| f.write("tasty\n") }

Tell Vagrant to use Chef, and to run the "beer" recipe

$ edit Vagrantfile
config.vm.provision :chef_solo do |chef|
  chef.cookbooks_path = "site-cookbooks"
  chef.add_recipe "beer"
end

Boot the VM, thus running chef-client and our beer recipe

$ vagrant up

Verify it worked

$ vagrant ssh -c 'cat /tmp/beer.txt'
tasty

SSH tunnels are great

Dev, DevOps
At work I'm connecting to multiple Munin statistics servers. On my lappytop I'm only running one server, so how do I get multiple sets of results? Answer: create a tunnel to another server!

The Munin protocol is extremely easy: send "fetch X" to get statistics on X. In my example, df = disk file usage. Here's how to get local information, via Munin running locally on port 4949.

$ echo 'fetch df' | nc -q1 localhost 4949
# munin node at freebeer
_dev_sda1.value 5.29318865322941
_dev.value 0.0339391645617528
_dev_shm.value 0.378827479751794
_var_run.value 0.00922469512382616
_var_lock.value 0
.

Here's how to make a remote machine's Munin (on port 4949) show up on localhost (port 4950). This means we can scan multiple local ports to get information on many different machines.

ssh -fNL localport/localhost/remoteport remotehost

Option "-f" means drop into the background after asking for a password. Next option "-N"…
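Once a few tunnels are up (say ports 4950, 4951, ... each forwarding to a different machine), the "scan multiple local ports" idea can be scripted. A sketch, not from the post; the port-to-host mapping here is made up:

import socket

TUNNELS = {4949: 'localhost', 4950: 'staging', 4951: 'production'}  # assumed mapping

def fetch(port, plugin='df'):
    sock = socket.create_connection(('localhost', port))
    f = sock.makefile('rw')
    f.readline()                      # banner: "# munin node at ..."
    f.write('fetch %s\n' % plugin)
    f.flush()
    values = []
    for line in f:
        if line.strip() == '.':       # Munin ends each response with a lone dot
            break
        values.append(line.strip())
    sock.close()
    return values

for port, host in sorted(TUNNELS.items()):
    print(host, fetch(port))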

I salute Fabrice Bellard

General
Recently the amazing hack of "boot Linux in a web page" has been circulating. Bellard wrote an x86 emulator in JavaScript, then took straight Linux binaries and was able to boot them. In a web page. At reasonable speeds. His distribution even includes a C compiler. This is amazing, but that's not all.

Bellard wrote the emulator QEMU, which lets you run one set of software "inside" another. It's mostly used for virtual machines, so you can run multiple operating systems inside a "host" OS. The guests can run Linux or Windows or whatever. The guest virtual machines can even be of different CPU types: ARM on an x86! This can be incredibly useful. But that's not all.

A large majority of video software uses the "ffmpeg" library. It's quite…