Sometimes you want to quickly bring up a high-performance EC2 compute cluster with a low-latency interconnect for prototyping, developing, or benchmarking custom distributed system or cluster software. When you have several such clusters and want to stop/start each cluster as a unit, and also perform parallel ssh operations on each one as a unit, the EC2 web console, awscli, and regular ssh can become unwieldy.
For this kind of use case, DustCluster can come in handy:
DustCluster is a command-line shell that lets you perform node operations and fast stateful ssh on named clusters of EC2 nodes. (Disclaimer: I'm its primary author.)
It now has a plugin command that lets you bring up an EC2 cluster from a minimal spec (node names, instance types, count) and ssh into it with zero configuration. Behind the scenes it generates a fully configured CloudFormation stack from this high-level spec.
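For comparison, doing the stop/start-as-a-unit part by hand with boto3 looks something like the sketch below. This is my own illustration, not DustCluster's implementation: it assumes the cluster's instances carry a `cluster` tag, and the helper names (`cluster_filter`, `stop_cluster`) are hypothetical.

```python
def cluster_filter(name):
    """Build an EC2 describe-instances filter matching a hypothetical
    `cluster` tag (the tag convention here is my own, not DustCluster's)."""
    return [{"Name": "tag:cluster", "Values": [name]}]

def cluster_instance_ids(ec2, name):
    """Collect the instance ids of every instance tagged with this cluster name."""
    reservations = ec2.describe_instances(Filters=cluster_filter(name))["Reservations"]
    return [i["InstanceId"] for r in reservations for i in r["Instances"]]

def stop_cluster(name):
    """Stop all instances in the named cluster as one unit."""
    import boto3                 # imported here so the pure helpers above
    ec2 = boto3.client("ec2")    # remain usable without boto3 installed
    ids = cluster_instance_ids(ec2, name)
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids
```

Starting the cluster back up is symmetric (`ec2.start_instances(InstanceIds=ids)`); the point is that even this simple case needs tagging discipline and a dozen lines of glue per operation, which is the bookkeeping DustCluster wraps up for you.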
rcviz is a small Python module for recursive call graph visualization, which I wrote a few weekends ago. It differs from regular call graph visualizations in that (i) it shows the recursion tree, with each invocation of the function as a separate node, (ii) it shows the arguments and return values at each node, and (iii) it lets you track and graph the execution of just the function(s) you are interested in, without affecting or slowing down the rest of the codebase.
It’s probably only useful for visual intuition and debugging of recursive algorithms, not general-purpose call graphs. Below, I show the usage and output for a Fibonacci numbers routine, and then for a recursive descent parser (parser code due to Dr. Tim Finin, from here). For another example, see the quicksort visualizations in the rcviz GitHub README.
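To make the idea concrete, here is a minimal sketch of what "each invocation becomes a node carrying its args and return value" means. This is an illustration of the technique, not rcviz's actual API; the `trace` decorator and `CallNode` class are names I made up for the example.

```python
import functools

class CallNode:
    """One invocation of the traced function: its args, return value, children."""
    def __init__(self, args):
        self.args = args
        self.result = None
        self.children = []

def trace(func):
    """Record each call of `func` as a node in a recursion tree.

    Illustrative only -- rcviz's real decorator also renders the tree
    with graphviz; here we just build the node structure in memory."""
    root = CallNode(None)            # sentinel above the first real call
    stack = [root]

    @functools.wraps(func)
    def wrapper(*args):
        node = CallNode(args)
        stack[-1].children.append(node)   # attach under the current caller
        stack.append(node)
        try:
            node.result = func(*args)
        finally:
            stack.pop()
        return node.result

    wrapper.root = root
    return wrapper

@trace
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(4)
top = fib.root.children[0]       # the outermost fib(4) invocation
print(top.args, top.result)      # (4,) 3
print(len(top.children))         # 2 -- the fib(3) and fib(2) subtrees
```

Note that only the decorated function is instrumented; everything else in the program runs untouched, which is the third property listed above.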
* Nuitka compiles Python 2.6 and 2.7 into C++ that calls into libpython. It claims to run the CPython 2.6 test suite correctly.
* It can be used to speed up programs where the majority of the time is spent executing Python instructions (as opposed to calling into native libraries or doing I/O).
* Its authors claim a 0 to 258% speedup on pystone micro-benchmarks. Some micro-benchmark figures here.
* Written by Kay Hayen.
* It works out of the box with zero configuration. Manual.
* Create an exe from your python code:
$nuitka --exe ga1.py
* It can optionally recurse into modules, with module-level granularity, controlled by command line switches:
$nuitka --exe --recurse-to=pyevolve ga1.py
* Run the resulting executable instead of the Python script:
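To get a feel for which programs qualify under the point above (time spent in the interpreter loop, not in native libraries or I/O), here is a toy example of my own, unrelated to ga1.py:

```python
def dot(xs, ys):
    """Pure-Python inner product: every multiply-add executes as interpreted
    bytecode, so a compiler like Nuitka can strip interpreter overhead."""
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

# By contrast, the same arithmetic done inside a C-backed library (numpy etc.)
# spends almost no time in the interpreter, so compiling the caller gains little.
print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))   # 32.0
```

A pyevolve run fits the first category: the genetic-algorithm inner loops are plain Python, which is why it makes a plausible real-world benchmark.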
= Real world use case and benchmark: speeding up pyevolve
Computer science has largely neglected to define a methodology, past the release stage, for delivering ongoing reliability in continuously available distributed systems (system = hardware + software + operators). What follows is a collection of thoughts on this aspect of reliability, and on how we can draw analogies from other industries to fill this gap.
= Elements of reliability
The DL385 G1 is one of the first dual-processor, dual-core rack servers shipped by HP, around 2006. Equipped with Opteron 260 to 285 processors, they are reasonably powerful beasts. Refurbished machines go for about $150 on eBay – this includes the legendary HP SmartArray RAID controller, 10k rpm UltraSCSI disks, dual GigE ports, and Integrated Lights-Out (an embedded TCP/IP server for out-of-band diagnostics) – probably the cheapest quad-core machine your money can buy, and great for your home lab. I measured power consumption at ~300 watts on a multi-threaded test pegging all the CPUs for over 15 minutes. (However, the eight internal fans were running at a mere 20% when I measured this.)
The install can be a hassle: your machine probably will not have an optical drive, it doesn’t have enough on-board video RAM to handle the install screens of recent releases (e.g. Ubuntu 11), and since this hardware has been spinning in a datacenter for six years, something might be broken. Hopefully these notes will save you some time.
1. Workaround for low video RAM during install: you can run the CentOS 6 installer in text mode (with some loss of install features) by dropping to the boot: prompt and passing the ‘text’ parameter to the installer kernel, as described here.
2. The cciss driver has been part of the stock Linux kernel since at least 2.4 (e.g. see the 2.4.3 block drivers). Newer HP RAID controllers use the hpsa driver, which has been in the kernel since 2.6.33.