<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>Passing Curiosity: Posts tagged event</title>
    <link href="https://passingcuriosity.com/tags/event/event.xml" rel="self" />
    <link href="https://passingcuriosity.com" />
    <id>https://passingcuriosity.com/tags/event/event.xml</id>
    <author>
        <name>Thomas Sutton</name>
        <email>me@thomas-sutton.id.au</email>
    </author>
    <updated>2018-05-17T00:00:00Z</updated>
    <entry>
    <title>Koshu tasting at Sake Shop</title>
    <link href="https://passingcuriosity.com/2018/koshu-tasting-sake-shop/" />
    <id>https://passingcuriosity.com/2018/koshu-tasting-sake-shop/</id>
    <published>2018-05-17T00:00:00Z</published>
    <updated>2018-05-17T00:00:00Z</updated>
    <summary type="html"><![CDATA[<p>Most sake is sold and drunk very soon after it is produced but some, <em>koshu</em>, is aged. At this event Leigh Hudson from <a href="https://www.sakeshop.com.au/">Sake Shop</a> presented four koshu (and a mirin) from two producers. The five drinks (shown below from right to left) were:</p>
<p><img src="/files/2018/koshu-tasting/koshu.jpg" alt="The lineup, right to left" /> </p>
<ol type="1">
<li><a href="https://www.sakeshop.com.au/products/senbazuru-aged-daiginjo-900ml">Senbazuru aged daiginjo</a> $188.00</li>
<li><a href="https://www.sakeshop.com.au/products/daruma-masamune-3-year-old-koshu-720ml">Daruma Masamune 3 year old</a> $69.95</li>
<li><a href="https://www.sakeshop.com.au/products/daruma-masamune-5-year-old-koshu-720ml">Daruma Masamune 5 year old</a> $118.95</li>
<li><a href="https://www.sakeshop.com.au/products/daruma-masamune-10-year-old-koshu-720ml">Daruma Masamune 10 year old</a> $189.95</li>
<li><a href="https://www.sakeshop.com.au/collections/mirin/products/fukuraijun-ume-mirin-720ml">Fukuraijun ume mirin</a> $48.95</li>
</ol>
<p><em>Senbazuru aged daiginjo</em></p>
<blockquote>
<p>A light koshu made from daiginjo aged for 10 years. Pale yellow and thin. Sweet melon, juicy pear, and nashi aromas are quite pronounced. A little drier than I expected given the nose. The melon notes are present in the palate. Long finish with hints of honeydew melon.</p>
<p>As it warmed the melon aromas became more prominent, the mouthfeel thicker, and the finish longer. The palate had more ripe orchard fruits.</p>
</blockquote>
<p><em>Daruma Masamune 3 year old koshu</em></p>
<blockquote>
<p>Golden yellow, honey colour. Quite viscous. Honey and ripe apricot notes on first taste. Then savoury notes appear: dried ham and mushroom.</p>
</blockquote>
<p><em>Daruma Masamune 5 year old koshu</em></p>
<blockquote>
<p>Thick, clinging liquid with a rich honey colour.</p>
<p>Richer, more complex nose than the 3 year old with more savoury notes: rich dripping gravy or vegemite. But in the mouth I get notes of dark chocolate and coffee beans! Some acid notes and hints of smoke hiding in the back. Like eating chocolate coated coffee beans.</p>
<p>This is totally unlike the 3 year old and is, by far, my favourite of the drinks today.</p>
</blockquote>
<p><em>Daruma Masamune 10 year old koshu</em></p>
<blockquote>
<p>Very dark, brown mahogany colour. Develops the themes from the 5 year old: chocolate and coffee, touch of caramel sweetness. Like a fruity specialty coffee.</p>
</blockquote>
<p><em>Fukuraijun ume mirin</em></p>
<blockquote>
<p>Tastes like ume. Doesn’t have the somewhat cloying sweetness I find in some umeshu. Nice, but not something I’d seek out.</p>
</blockquote>
<p>I can’t see myself buying anything like the aged daiginjo – it was so similar to an unaged daiginjo that I can’t see past the extra $100+ – or the ume mirin, but I’ll definitely be looking for more like the Daruma Masamune koshu. It can keep company with the bottle of the 5 year old that I bought!</p>]]></summary>
</entry>
<entry>
    <title>Yow! Conference, Sydney 2013</title>
    <link href="https://passingcuriosity.com/2017/yow-developer-conference/" />
    <id>https://passingcuriosity.com/2017/yow-developer-conference/</id>
    <published>2017-01-02T00:00:00Z</published>
    <updated>2017-01-02T00:00:00Z</updated>
    <summary type="html"><![CDATA[<blockquote>
<p>It’s the beginning of a new year so I’m cleaning out some files in
my drafts directory. This post was started on December 13, 2013.</p>
</blockquote>
<ul>
<li>~40 speakers</li>
<li>~440 attendees</li>
<li>three cities</li>
</ul>
<p>YOW! LambdaJam in May was excellent and this was pretty great too. The YOW!
people seem to put on great conferences.</p>
<h2 id="day-one">Day one</h2>
<h3 id="jeff-hawkins-on-machine-intelligence">Jeff Hawkins on machine intelligence</h3>
<p>The day kicked off with Jeff Hawkins (of Palm and Handspring fame) giving a
keynote in which he described the neurologically-inspired approach to machine
intelligence being developed by his current company (<a href="https://groksolutions.com/">Grok Solutions</a>) and
others. The basis of this approach is in building learning systems with many of
the properties of biological intelligence (universality, robustness, etc.) by
modelling them on the operation of neural structures in the <a href="http://en.wikipedia.org/wiki/Neocortex">neocortex</a>.</p>
<p>One of the key points was the use of representations which enable data storage
and processing in ways which are efficient and accurate <em>enough</em> for machine
intelligence. In particular, the use of <em>sparse distributed representations</em>
(SDR) is key to the model of intelligence described. Dense representations
(such as ASCII) use a very small number of bits to represent particular states
but each bit is devoid of semantic information: the state of “bit 3” in an
ASCII character conveys no useful information. An SDR uses many more bits, each
representing a particular feature in the learning domain (e.g. a property of
objects or a word in a corpus); as such, most bits in a particular SDR instance
will be 0 (hence the “sparse” in the name).</p>
<p>SDRs have several properties which make them useful for learning tasks: similar
objects have similar representations; they allow sub-sampling without losing
all meaning; they behave well with union/membership and other set operations
(an SDR is, in some sense, similar to a <a href="http://en.wikipedia.org/wiki/Bloom_filter">Bloom filter</a>). According to Jeff:</p>
<blockquote>
<p>“All intelligent machines will be based on sparse distributed representations.”</p>
</blockquote>
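<p>The set-like behaviour of SDRs is easy to sketch. Here's a toy illustration (my own, not from the talk) that represents an SDR as the set of its active bit indices; real SDR encoders, such as those in NuPIC, are far more sophisticated:</p>

```python
# Toy sketch of sparse distributed representations (SDRs) as sets of
# active bit indices (illustrative values only).

def overlap(a, b):
    """Similarity between two SDRs: the number of shared active bits."""
    return len(a & b)

# Two similar "objects" share many active bits; a dissimilar one shares few.
cat = {3, 17, 41, 102, 256, 511, 733, 901}
dog = {3, 17, 41, 102, 256, 600, 733, 950}    # similar to cat
car = {5, 99, 300, 412, 618, 777, 845, 1023}  # unrelated

assert overlap(cat, dog) > overlap(cat, car)

# Sub-sampling keeps most of the meaning: half of cat's bits still
# overlap strongly with the full representation.
sample = set(sorted(cat)[:4])
assert overlap(sample, cat) == 4

# A union stores several SDRs at once; membership becomes a cheap
# subset test, much like a Bloom filter.
union = cat | dog
assert cat <= union              # cat is "in" the union
assert overlap(car, union) <= 2  # car almost certainly is not
```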
<p>The <em>cortical learning algorithm</em> developed by Grok Systems and implemented in
the <a href="http://numenta.org/">Numenta Platform for Intelligent Computing</a> open source project
(GPLv3) builds on these ideas and implements a learning system modelled on a
cortical region to learn about “normal” inputs and then predict and detect
anomalies from streaming input. Jeff described two applications in which this
software has been deployed: monitoring and detecting anomalies in monitoring
server metrics, and natural language processing.</p>
<p>The first example (built by Grok Solutions and included in the NuPIC open source
project) is used to monitor metrics from resources in Amazon Web Services and
to detect anomalies in their behaviour. This approach can identify conditions
which traditional (and, it must be said, much, much simpler) threshold-based
approaches cannot.</p>
<p>The second example – developed by <a href="http://www.cept.at/">CEPT Systems</a> – derives SDRs of words
from Wikipedia pages and then deploys these SDRs in particular learning
problems. This can be used to demonstrate the set-like properties of SDRs:
sdr(apple) - sdr(fruit) = sdr(computer). A CLA trained on inputs like “ANIMAL
VERB OBJECT” was able to make sensible predictions for new inputs it hadn’t
seen before, including “fox” and “eat” yielding “rodent”.</p>
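<p>The word-SDR arithmetic can be pictured the same way. In this toy version (mine, with hand-picked context features standing in for CEPT's Wikipedia-derived bits), subtracting the "fruit" features from "apple" leaves the computer-related ones:</p>

```python
# Toy illustration of CEPT-style word arithmetic: each word's SDR is the
# set of contexts it appears in (hand-picked here for the example).
apple    = {"pie", "tree", "juice", "mac", "keyboard", "software"}
fruit    = {"pie", "tree", "juice", "banana", "pear"}
computer = {"mac", "keyboard", "software", "mouse"}

residue = apple - fruit  # strip the fruit senses of "apple"

# What remains overlaps with "computer", not with "fruit".
assert residue == {"mac", "keyboard", "software"}
assert len(residue & computer) > len(residue & fruit)
```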
<p>This was a pretty great talk and got the conference off to a great start!</p>
<h3 id="charles-nutter-on-language-engineering-for-the-jvm">Charles Nutter on language engineering for the JVM</h3>
<p>In the second session I saw <a href="http://blog.headius.com/">Charles Nutter</a>’s talk “Beyond JVM” in which
he discussed the engineering issues which face JVM-targeting languages like
<a href="http://jruby.org/">JRuby</a>. Charles discussed some of the pros and cons for targeting the JVM
(many of the pros <em>are</em> also cons) and then jumped into four of the key
challenges faced by the JRuby project: startup time, native interoperability,
language performance, and the lack of flexibility in the JVM (the big ball of
C++).</p>
<p>Charles discussed a number of ways to improve JVM and application <strong>startup
time</strong>: tweaking JVM flags helps, but is fragile in the face of different
JVMs and JVM version changes, and typically hurts later performance; keeping
persistent JVM instances (using tools like <a href="http://www.martiansoftware.com/nailgun/">Nailgun</a>) can cause
problems cleaning up resources (memory leaks, background threads, etc.);
pre-loading JVMs with tools like <a href="https://github.com/flatland/drip">Drip</a> can improve startup while
avoiding the cleanup problems of persistent JVMs.</p>
<p>The problem of <strong>native interoperability</strong> is a complex one with a range of
solutions. The traditional approach used JNI, which is horrible: you write code
both for your intention (“I want to call getpid()”) <em>and</em> for how to implement it.
The JNR project provides a real foreign function interface on the JVM, structured
into a number of layers: jffi provides platform-specific FFI functionality;
jnr-ffi defines structures, etc., to interface with jffi; jnr-posix exposes a
range of POSIX APIs (the ones JRuby has needed so far); jnr-constants
defines a range of constants as defined on the host platform; and jnr-enxio
implements Java NIO for arbitrary file descriptors (allowing a range of I/O
functionality which can’t otherwise be expressed on the JVM). JNR generates code
which is as direct as possible for each particular case, resulting in very low
overheads for each call.</p>
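<p>The “describe the call, don’t implement it” style of FFI is easy to see by analogy. This is my comparison, not from the talk: Python’s <code>ctypes</code> calling <code>getpid()</code> with no C glue code at all, which is the shape of call JNR gives JVM languages (POSIX-only sketch):</p>

```python
import ctypes
import os

# Declarative FFI: describe the foreign function, then call it directly.
# No hand-written glue code in C, unlike JNI.
libc = ctypes.CDLL(None)             # handle to the already-loaded C library
libc.getpid.restype = ctypes.c_int   # declare the signature...
pid = libc.getpid()                  # ...and just call it

assert pid == os.getpid()
```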
<p>One of the key motivations for JRuby is <strong>language performance</strong>. While the JVM
specification made mention of non-Java languages, it didn’t go out of its way
to actually support them. The relatively new <code>invokedynamic</code> bytecode allows
language implementers to customise invocation mechanisms to suit the specifics
of their language. The JVM will cache and optimise the results of dynamic
invocations as normal. This can result in plain Ruby code running on JRuby being
faster than using a native extension under CRuby (red-black tree benchmark).</p>
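<p>The caching that <code>invokedynamic</code> call sites get can be sketched with a toy monomorphic inline cache. This is my analogy of the idea, not JRuby’s implementation: the call site remembers the last receiver class and its resolved method, and only re-resolves on a miss:</p>

```python
# Toy monomorphic inline cache: the call site caches the resolved method
# for the last-seen receiver class, re-resolving only on a class change.
class CallSite:
    def __init__(self, name):
        self.name = name
        self.cached_cls = None
        self.cached_fn = None
        self.lookups = 0

    def invoke(self, receiver, *args):
        cls = type(receiver)
        if cls is not self.cached_cls:           # cache miss: resolve
            self.cached_cls = cls
            self.cached_fn = getattr(cls, self.name)
            self.lookups += 1
        return self.cached_fn(receiver, *args)   # cache hit: direct call

class Dog:
    def speak(self):
        return "woof"

site = CallSite("speak")
results = [site.invoke(Dog()) for _ in range(1000)]
assert results[0] == "woof"
assert site.lookups == 1  # resolved once, then served from the cache
```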
<p>Finally, Charles discussed approaches that language implementers can use to
deal with the <strong>inflexibility of the JVM internals</strong>. The <a href="http://openjdk.java.net/projects/graal/">Graal project</a>
allows language implementers to customise the way that their implementations
are optimised and emit the ASM/HotSpot intermediate representation appropriate
for the particular language’s constructs. Truffle, a framework built on top of
Graal, allows you to implement an interpreter for your language (structured and
annotated in a particular way) and to automatically derive a JIT for it. (This
sounds a little like the second Futamura projection to me.)</p>
<p>This talk was very well presented and very informative. If I’d known it was
“about” JRuby I probably wouldn’t have gone but I’m glad I did!</p>
<h3 id="julien-verlaguet-on-facebooks-static-typing-for-php">Julien Verlaguet on Facebook’s static typing for PHP</h3>
<p><a href="https://www.facebook.com/julien.verlaguet">Julien Verlaguet</a> is an engineer at Facebook and spoke about the work
they’ve done to improve on the PHP language with <a href="http://www.hiphop-php.com/">HHVM</a> and “Hack” - a
statically typed version of PHP which was the primary subject of the talk.</p>
<p>Contrary to Facebook’s earlier attempts at improving the deployment and runtime
story for PHP (the HipHop compiler translated PHP code into C++ which compiled
into a native binary), HHVM is a fairly traditional virtual machine with a JIT.
The <a href="http://www.hiphop-php.com/blog/">HHVM blog</a> has a bunch of interesting posts about the development of
the VM and the JIT both, go read it!</p>
<p>HHVM supports two source languages: normal PHP and Hack. Hack (the code name
might change) is a statically typed variant of PHP which is compatible with
PHP, uses the same run-time representations within the VM and was designed for
incremental adoption (a necessity when dealing with massive codebases like
Facebook.com).</p>
<p>The static typing for Hack requires that the programmer add type annotations to
class members, function parameters and return values and infers all other
types. The types supported include the basic types built-in to PHP, collections
and generics. It also distinguishes the types of nullable and non-nullable
values. PHP was not designed for type checking, so the type checker must make
several allowances. The most interesting is, perhaps, the delay of type
unification to call sites rather than function definitions.</p>
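<p>As a rough analogy only (Python type hints, not actual Hack syntax): Hack’s annotations on members, parameters, and returns, and its nullable/non-nullable distinction, look broadly like gradual typing elsewhere:</p>

```python
from typing import Optional

# Rough analogy to Hack's annotations (not Hack syntax): annotate class
# members, parameters, and return types; Optional marks nullable values,
# like Hack's ?string, and a checker forces the None case to be handled.
class User:
    name: str                 # non-nullable member
    nickname: Optional[str]   # nullable member

    def __init__(self, name: str, nickname: Optional[str] = None) -> None:
        self.name = name
        self.nickname = nickname

def display_name(u: User) -> str:
    # The nullable member must be checked before use.
    return u.nickname if u.nickname is not None else u.name

assert display_name(User("Ada")) == "Ada"
assert display_name(User("Ada", "ada99")) == "ada99"
```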
<p>The Hack type checker is implemented as a daemon which listens for file system
events on the code base and communicates with a client to “run” a check and
present errors. The errors are designed to give specific, useful feedback to
the programmer including references to each annotation which resulted in the
error (“it tells a story”). The checker is also able to output coloured
“coverage” style reports of code showing which code is checked/unchecked.</p>
<p>Conversion of existing PHP to Hack has happened in two ways: organic adoption
by developers as they and their teams take up Hack; and automatic conversion
using tools to analyse, refactor and monitor changes in the code base. This
includes support for “soft” conversions, which are monitored but not enforced
until they are known to be accurate.</p>
<p>Hack and HHVM sound like great improvements over PHP. I never got around to
trying HPHP before it went away but perhaps I’ll give HHVM a go.</p>
<h3 id="kevlin-henney-deconstructed-the-solid-principles">Kevlin Henney deconstructed the SOLID principles</h3>
<p><a href="http://kevlin.tel/">Kevlin Henney</a></p>
<p>I’m not really one for talks about methodologies and such, but Kevlin’s talk
“the <a href="http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)">SOLID</a> Design Principles Deconstructed” was entertaining and not a
little informative.</p>
<h3 id="gilad-bracha-on-dart-and-newspeak">Gilad Bracha on Dart and Newspeak</h3>
<p><a href="http://bracha.org/">Gilad Bracha</a> is an engineer at Google where he works on Dart. He spoke
about Dart and Newspeak.</p>
<h3 id="joe-albahari-on-concurrency-in-.net">Joe Albahari on concurrency in .NET</h3>
<p><a href="http://www.albahari.com/">Joe Albahari</a> spoke about concurrency in C# 5.</p>
<h3 id="scott-hanselman-on-the-web-platform">Scott Hanselman on the web platform</h3>
<p><a href="http://www.hanselman.com/">Scott Hanselman</a> works on Azure and ASP.NET for Microsoft.</p>
<h2 id="day-two">Day Two</h2>
<h3 id="philip-wadler-reprised-the-first-monad-tutorial">Philip Wadler reprised the first monad tutorial</h3>
<p><a href="http://homepages.inf.ed.ac.uk/wadler/">Philip Wadler</a></p>
<h3 id="aaron-bedra-on-behaviour-and-reputation-based-security-controls">Aaron Bedra on behaviour and reputation based security controls</h3>
<p><a href="http://aaronbedra.com/">Aaron Bedra</a></p>
<h3 id="sam-newman-on-microservice-architecture">Sam Newman on microservice architecture</h3>
<p><a href="http://blog.magpiebrain.com/">Sam Newman</a></p>
<h3 id="functional-programming-in-industry">Functional programming in industry</h3>
<p><a href="http://korny.info/">Kornelis Sietsma</a>, <a href="http://www.michaelneale.net/">Michael Neale</a> and <a href="http://twitter.com/jedws">Jed Wesley-Smith</a>
gave a set of three talks about the adoption and use of functional programming
languages at three different companies.</p>
<h3 id="jay-fields-on-adopting-clojure">Jay Fields on adopting Clojure</h3>
<p><a href="http://jayfields.com/">Jay Fields</a></p>
<h3 id="daniel-spiewak-on-modules-and-the-expression-problem">Daniel Spiewak on modules and the expression problem</h3>
<p><a href="http://www.codecommit.com/blog/">Daniel Spiewak</a></p>
<h3 id="stewart-gleadow-on-mobile-app-and-their-apis">Stewart Gleadow on mobile app and their APIs</h3>
<p><a href="http://www.stewgleadow.com/">Stewart Gleadow</a></p>
<h2 id="sponsors-and-exhibitors">Sponsors and Exhibitors</h2>
<p>Sponsors include Suncorp, DiUS, ThoughtWorks, Mashery,</p>]]></summary>
</entry>
<entry>
    <title>Events in August 2014</title>
    <link href="https://passingcuriosity.com/2017/august-2014-events/" />
    <id>https://passingcuriosity.com/2017/august-2014-events/</id>
    <published>2017-01-02T00:00:00Z</published>
    <updated>2017-01-02T00:00:00Z</updated>
    <summary type="html"><![CDATA[<blockquote>
<p>It’s the beginning of a new year so I’m cleaning out some files in
my drafts directory. This post was started on August 18, 2014.</p>
</blockquote>
<p>In an effort to blog more regularly and to post content that is more mine
than transcriptions of others, I’ll be writing a regular round-up of the
events I attend. To start with, here are a few from the last three weeks.</p>
<h2 id="australian-openstack-user-group">Australian OpenStack User Group</h2>
<p>I’ve recently started working at <a href="http://www.anchor.net.au/">Anchor Systems</a> and sit next to some
of the people working on the new <a href="http://www.anchor.com.au/blog/category/cloud-computing/">Anchor OpenStack cloud</a>, so I’ve
been hearing (and overhearing) quite a bit about OpenStack and OpenStack
deployment recently. One of the things that sounded quite interesting is
“OpenStack On OpenStack” (or “Triple O”) – which uses an existing OpenStack
cloud to bootstrap and manage a new OpenStack cloud. The recent <a href="http://www.meetup.com/Australian-OpenStack-User-Group/events/189477362/">Australian
OpenStack User Group</a> at the Anchor Systems offices in Sydney had
several talks about Triple O.</p>
<h2 id="sydney-postgresql-user-group">Sydney PostgreSQL User Group</h2>
<p>I’ve always quite liked <a href="http://www.postgresql.org/">PostgreSQL</a> but my current project is my first
chance to work with it in a while. The <a href="http://www.meetup.com/Sydney-PostgreSQL-User-Group/events/197696352/">Sydney PostgreSQL User
Group</a> meeting last week had a presentation by Venkata B Nagothi
from Fujitsu Australia about the changes in the forthcoming PostgreSQL 9.4
release.</p>
<h2 id="port80-sydney">Port80 Sydney</h2>
<p>I was a regular attendee at Port80 in Perth for years, but haven’t been to
<a href="http://www.meetup.com/Port80-Sydney/events/192062222/">Post80 Sydney</a> since I moved here 18 months ago. The meeting last
week was hosted at Anchor, so I stuck around after work to eat some pizza and
listen to the talk about user experience design.</p>]]></summary>
</entry>
<entry>
    <title>Linux.conf.au 2016 round-up</title>
    <link href="https://passingcuriosity.com/2016/lca2016-round-up/" />
    <id>https://passingcuriosity.com/2016/lca2016-round-up/</id>
    <published>2016-02-08T00:00:00Z</published>
    <updated>2016-02-08T00:00:00Z</updated>
    <summary type="html"><![CDATA[<p>This year’s <a href="https://linux.conf.au/">Linux.conf.au</a> was held at Deakin University’s
waterfront campus in Geelong, Victoria and work sent me and a few of
my colleagues. There’s a lot of material in up to six concurrent
tracks over five days but here are the things I particularly liked.</p>
<figure>
<img src="/files/2016/lca2016/deakin-640.jpg" alt="Deakin University waterfront campus" />
<figcaption aria-hidden="true">Deakin University waterfront campus</figcaption>
</figure>
<p>Two talks – <em>Continuous Delivery using blue-green deployments and
immutable infrastructure</em> and <em>The Twelve-Factor Container</em> – had
some interesting, though not entirely new, things to say about CI/CD
and reliable, sustainable build and operations. If containers and
infrastructure as code are your thing, they might be worth watching.</p>
<p>The two talks about Swift (the OpenStack object storage system, not
the banking system, the programming language, the bird, etc., etc.)
gave a high-level overview of their approach to sharding, metadata
storage, and erasure codes. Sticking with “putting data in places”,
Bron Gondwana from Fastmail described <em>Twoskip</em>, a single-file
database format based on skip-lists they built for use in their email
infrastructure. The <em>Dropbox Database Infrastructure</em> talk had some
interesting detail about tooling around a very large MySQL system.</p>
<p>I really enjoyed the talks from a few people associated with
NICTA/Data 61/CSIRO/UNSW about formal methods, the eChronos real-time
embedded operating system kernel, the SMACCM project, etc. The
functional programming miniconf had some very accessible talks on some
foundational topics (viz. parametric polymorphism, Church encodings,
and “you can actually write production software in Haskell”).</p>
<figure>
<img src="/files/2016/lca2016/pier-640.jpg" alt="The Penguin dinner was held at The Pier" />
<figcaption aria-hidden="true">The Penguin dinner was held at The Pier</figcaption>
</figure>
<p>I saw two talks on security topics which might be useful (or at least
entertaining) for those of us who aren’t specialists. <em>Using Linux
features to make a hacker’s life hard</em> described a number of things
you can do to a Linux system to make it difficult for attackers to
exploit your systems (for the adversary’s point of view see
<a href="https://www.youtube.com/watch?v=o5cASgBEXWY"><em>Ain’t No Party Like A Unix Party</em></a> from 2013). <em>Playing to lose</em>
described approaches to thinking about security which will probably be
useful to people designing, building, and operating systems
(i.e. almost all of us).</p>
<figure>
<img src="/files/2016/lca2016/bell-640.jpg" alt="Genevieve Bell delivering her keynote" />
<figcaption aria-hidden="true">Genevieve Bell delivering her keynote</figcaption>
</figure>
<p>Finally the stand-out talks of the conference for me were two of the
keynotes. Catarina Mota spoke about open source, open hardware, and
the newer open materials and open technologies movements. The open
source architecture projects she described made me want to build a
house. The last day of the conference opened with a keynote by
Genevieve Bell – anthropologist, Intel Fellow, and VP of Corporate
Strategy at Intel – about themes that will likely dominate the way
our technologies create ‘the future’. If you watch only one video from
the conference I’d suggest <a href="https://www.youtube.com/watch?v=QqADuKyBNMc" title="Genevieve Bell's linux.conf.au 2016 keynote">make it this one</a>!</p>
<p>Most of the videos from the five days of sessions are already
available on the <a href="https://www.youtube.com/user/linuxconfau2016">Linux.conf.au 2016 YouTube channel</a>. No matter
what you’re interested in, you’ll probably find something good in there.</p>
<figure>
<img src="/files/2016/lca2016/apostles-640.jpg" alt="The Twelve Apostles" />
<figcaption aria-hidden="true">The Twelve Apostles</figcaption>
</figure>
<p>Our flights back to Sydney were at 1600 so a few of us jumped in a
hire car (great idea Ramon!) and went to see the
<a href="https://en.wikipedia.org/wiki/The_Twelve_Apostles_(Victoria)">Twelve Apostles</a>. It was well worth the few hours in the car!</p>]]></summary>
</entry>
<entry>
    <title>Puppet Camp Sydney 2014</title>
    <link href="https://passingcuriosity.com/2014/puppet-camp-sydney/" />
    <id>https://passingcuriosity.com/2014/puppet-camp-sydney/</id>
    <published>2014-02-11T00:00:00Z</published>
    <updated>2014-02-11T00:00:00Z</updated>
    <summary type="html"><![CDATA[<p><a href="http://puppetlabs.com/community/puppet-camp">Puppet Camps</a> are regular, regional events for the Puppet community and
this is the second or third time I’ve attended one. They can feel a <em>tiny</em> bit
vendor-y (this should be unsurprising) but the quality of the talks and the
attendees is pretty good, in my experience.</p>
<p><strong>Nigel Kersten</strong>’s keynote talk was aimed at a pretty broad audience (a bit of
Puppet, what’s driving uptake, etc.) but also described some of the new
features in components included in the next release (IIRC) of Puppet
Enterprise. I was particularly interested to learn about <a href="http://docs.puppetlabs.com/puppet/latest/reference/ssl_autosign.html#policy-based-autosigning">policy based
auto-signing</a> and <a href="http://docs.puppetlabs.com/puppet/latest/reference/lang_variables.html#trusted-node-data">trusted node data</a> in Puppet 3.4+, <a href="http://docs.puppetlabs.com/guides/custom_facts.html#external-facts">external facts</a>
in Facter 1.7+, more readable output from Hiera 1.3+, and the news that
Puppet Labs will be supporting some of their modules from the <a href="http://forge.puppetlabs.com/">forge</a>.</p>
<p><strong>Peter Leschev</strong> from Atlassian described the process of introducing and
developing “infrastructure as code” in the Atlassian build engineering team. He
described their introduction of a number of tools and measures, and the impact
each had on confidence in the infrastructure changes being made. It was interesting to see the
journey of adding code reviews, Puppet, Vagrant-based development (with
Veewee), behaviour based testing (with Cucumber), continuous integration
(Bamboo and Vagrant), profiling (Puppet’s <code>--evaltrace</code> flag), automated
deployment (to staging) and notification (in HipChat). Later on I wished I’d
asked if the graphs of confidence in his slides were from measurements, or for
illustrative purposes only.</p>
<p><strong>Lindsay Holmwood</strong> from Bulletproof described the <a href="http://puppetlabs.com/community/puppet-camp#previous">Flapjack</a> monitoring
system – which seems pretty cool – and how you’ll be able to configure it
with Puppet (when he releases the Puppet module). The architecture of Flapjack
looked pretty interesting and I plan to have a play with it this weekend.</p>
<p><strong>Rene Medellin</strong> spoke about NAB’s move to push some of their workloads into
“the cloud” (AWS). They used Puppet as part of their SOE machine image building
process <em>and</em> in deployment as one of their monitoring and compliance tools.
Lots of Jenkins and automated building of AMIs and CloudFormation templates and
such.</p>
<p><strong>Aaron Hicks</strong> from Landcare Research NZ spoke about the way he uses Puppet in
a scientific research environment. Particularly interesting was the use of
Puppet to formalise the configuration of the many, many precious snowflake
machines used in the various research projects his organisation supports. The
idea of supplying Puppet manifests to help in the replication of scientific
computing sounds great.</p>
<p><strong>James Dymond</strong> and <strong>John Painter</strong> from Sourced Group described a series of
“Puppet in the AWS cloud” architectures they’d developed for clients in their
consulting engagements. Most interesting was their fourth (I think) solution,
where they implemented a “gateway” between AWS autoscaling notifications and
Puppet, allowing the master to sign certificates, delete node reports, etc. as
the AWS autoscaling system adds and removes nodes.</p>
<p><strong>Matt Moor</strong> from Atlassian described the way they use Puppet to manage their
SaaS offering. Each SaaS client has their own VM which, now, is managed using
Puppet. This allows them to manage service and version dependencies much more
reliably than their previous approach of building massive WAR files using Maven
and managing them with hack-y shell scripts.</p>
<p>The last talk was by <strong>Chris Barker</strong> from Puppet Labs who gave a product
demonstration of Puppet Enterprise. I’d already used most of the features
demoed but some of the newer stuff – especially the <a href="http://puppetlabs.com/presentations/introducing-puppet-enterprises-event-inspector">event inspector</a> –
looked pretty cool.</p>
<p>Puppet Camp Sydney 2014 was a great event and brought to mind again just how
much fun operations work (what little I’ve done) can be. In time, I expect the
slides and videos of the presentations will be available from the Puppet Labs
web-site on the <a href="http://puppetlabs.com/community/puppet-camp#previous">Previous Puppet Camps</a> page.</p>]]></summary>
</entry>
<entry>
    <title>Inaugural Sydney Elasticsearch Meetup</title>
    <link href="https://passingcuriosity.com/2013/sydney-elasticsearch-meetup/" />
    <id>https://passingcuriosity.com/2013/sydney-elasticsearch-meetup/</id>
    <published>2013-11-18T00:00:00Z</published>
    <updated>2013-11-18T00:00:00Z</updated>
    <summary type="html"><![CDATA[<p>The inaugural <a href="http://www.meetup.com/Elasticsearch-Sydney-Meetup/events/149068632/">Sydney Elasticsearch meetup</a> at Atlassian (who provided the
space, beer, and pizza) featured two talks:</p>
<ul>
<li><p><a href="http://tesser.org">Sharif Olorin</a> from Anchor systems spoke about monitoring Elasticsearch
clusters; and</p></li>
<li><p><a href="https://twitter.com/clintongormley">Clinton Gormley</a>, a core developer, gave an overview of the changes in
the impending 1.0 release.</p></li>
</ul>
<h2 id="sharif-olorin-on-monitoring-elasticsearch">Sharif Olorin on Monitoring Elasticsearch</h2>
<p>Sharif is a developer and system administrator at Anchor Systems and has been
working with Elasticsearch for about a year.</p>
<p>He highlighted a number of key points to be considered by anyone who is
monitoring an Elasticsearch cluster (in no particular order):</p>
<p>You should <strong>monitor every metric</strong> that you can get your hands on and keep as
much data for as long as you can, just in case. Sharif described several cases
where having data available made debugging problems observed in production much
easier.</p>
<p>While you should <em>monitor</em> everything, you should <strong>only alert on metrics
people care about</strong>. Being woken up at 3AM is pretty bad, but it’s worse when
the cause is not really a problem! Loss of redundancy, for example, probably
isn’t worth getting out of bed for; provisioning a new node can wait until
morning.</p>
<p>You should monitor and <strong>alert from your entire cluster</strong>, not just from some
node’s individual opinion about the whole cluster. There are a number of
problem conditions that can be difficult to accurately detect without having a
“whole cluster” view. Whole-cluster monitoring, though, doesn’t play nicely
with most host-based monitoring tools; you’ll probably need to define your own
custom checks which know how to interrogate the whole cluster.</p>
<p>Given that they’ll be checking every node in a cluster, these checks will need
to be highly concurrent and very fast. Sharif showed us a split brain check he
wrote in Golang.</p>
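<p>A whole-cluster check of this kind boils down to asking every node who it thinks the master is and flagging disagreement. Here’s a minimal Python sketch of the decision logic (my own; Sharif’s check was in Go, and the per-node fetch, e.g. via <code>GET /_cluster/state/master_node</code> on each node, is stubbed out):</p>

```python
# Minimal split-brain check: each node reports which master it believes
# in; more than one distinct master means the cluster has split.
def split_brained(master_by_node):
    """master_by_node maps node name -> master node id it reports."""
    masters = {m for m in master_by_node.values() if m is not None}
    return len(masters) > 1

healthy = {"node1": "A", "node2": "A", "node3": "A"}
split   = {"node1": "A", "node2": "B", "node3": "B"}

assert not split_brained(healthy)
assert split_brained(split)
```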
<p>You should <strong>automate recovery</strong> from as many alert conditions as you can.
Where it’s not possible to automatically <em>recover</em> from an error condition, you
should aim to <em>respond</em> sensibly. Sharif described an example in which an
alert triggered by a split-brain in the cluster might automatically switch all
nodes into read-only mode to prevent divergence.</p>
<p><strong>Use the statistics API</strong> as a source of many useful metrics about your nodes
and their opinions about the cluster. It also exposes a bunch of generic stuff
about the JVM and things.</p>
<p>Finally, Sharif gave a few tips about <em>tuning</em> Elasticsearch clusters. The core
of his advice boiled down to taking a principled approach: tuning individual
parameters, reproducing each test as closely as possible (same time of day,
etc.), consider <em>actual numbers</em> (not just graphs), etc. Essentially: use
science!</p>
<p><a href="https://speakerdeck.com/fractalcat/monitoring-elasticsearch-for-fun-profit-and-not-getting-woken-up-at-3am">Sharif’s slides</a> are available on Speaker Deck; I’ve probably got a bunch
of stuff wrong here, so you should probably go and review them for yourself!</p>
<h2 id="clinton-gormley-on-elasticsearch-1.0">Clinton Gormley on Elasticsearch 1.0</h2>
<p>Clinton works for Elasticsearch where he develops the Perl client libraries,
does training and evangelising, “keeps Elasticsearch honest” and some other
stuff. He gave us a run down on the new features and other improvements in the
forthcoming Elasticsearch 1.0 release.</p>
<p>Amongst the many things mentioned, these few stuck out to me:</p>
<p>You cannot currently use different versions of Elasticsearch in the same
cluster; upgrades involve tearing down your entire cluster and bringing up a
new one (possibly not in that order). 1.0 will allow <strong>rolling upgrades</strong> of
your nodes without having to do the whole cluster in one fell swoop.</p>
<p>You <em>can</em> <strong>backup</strong> the data in current Elasticsearch clusters, but it’s very
much a do-it-yourself process: disable flushing, find all primary shard
locations, copy their files, enable flushing. Version 1.0 will provide API
methods to trigger a snapshot which will be written to a configured
<em>repository</em> (S3, HDFS, etc.). Comparable changes have been made to the process
of <strong>restoring</strong> a snapshot: the current manual process will be replaced with a
few API calls.</p>
<p>The <strong>percolator</strong> functionality – which allows applications to do things like
reverse search, alerts, and updatable result sets – is now implemented in a
way which lets it scale as well as any other index in the cluster. It also
supports multiple indices, aliases, and has a bunch of other improvements.</p>
<p>The new <strong>cat API</strong> provides direct access to a range of metrics in
human-friendly formats (i.e. not large JSON documents). This includes a bunch
of things that humans and monitoring systems are often curious about: sizes,
counts, statuses, etc.</p>
<p>The existing support for facets has been drastically improved with a new
feature called <strong>aggregations</strong>. These allow you to express a bunch of things
which aren’t really expressible with traditional facets. These looked very
powerful and very cool!</p>
<p><a href="https://speakerdeck.com/elasticsearch/new-features-in-elasticsearch-v1-dot-0">Clinton’s slides</a> (originally prepared by Igor Motov) are available on
Speaker Deck; go read them!</p>
<p>The questions prompted a few interesting details: Elasticsearch have just hired
a few machine learning people to work on the product; they aren’t yet sure
what, specifically, they’ll be working on, but we can expect some learning-type
things in Elasticsearch soon.</p>]]></summary>
</entry>
<entry>
    <title>Sydney Continuous Delivery Meetup</title>
    <link href="https://passingcuriosity.com/2013/continuous-delivery-sydney-meetup/" />
    <id>https://passingcuriosity.com/2013/continuous-delivery-sydney-meetup/</id>
    <published>2013-10-30T00:00:00Z</published>
    <updated>2013-10-30T00:00:00Z</updated>
    <summary type="html"><![CDATA[<h2 id="atlassian-stash">Atlassian Stash</h2>
<p>Matthew Watson leads the Stash team at Atlassian. They released a minimum
viable product <em>really</em> early, then moved to very short delivery cycles;
incremental improvements both deliver features to customers and validate the
product (“we <em>will</em> catch up”).</p>
<p>Agile with git. Use feature branching to allow for isolated development with
functional and performance testing of <em>your</em> code as you develop it. Isolate
stable code from work-in-progress.</p>
<p>Code reviews (aka pull requests) to help ensure higher quality. Needed to
inculcate a respect for quality in the team; absolutely required for continuous
deployment.</p>
<p>Every commit in master results in two builds and pushes the result to the
“dog-fooding” deployment. Also runs a bunch of other checks against these
successful builds.</p>
<p>Whenever a feature branch is created, CI server creates a new “CI plan” for
that branch. These feature branch jobs are quite highly optimised compared to
the master.</p>
<p>Several build styles:</p>
<ul>
<li><p>“Checks” builds use checkstyle, findbugs, API compatibility, licensing, link
check docs. This is pretty fast; 4-5 minutes.</p></li>
<li><p>“Master” builds, every green build is deployed to dog-fooding instance.
Parallel; distribution, functional tests (40 minutes), jsunit, jshint, DB
migration, REST and hosting testing, first run, unit tests. Major gatekeeper
(~11 workers)</p></li>
<li><p>“Dependent” builds: database matrix (12 versions of 5 DBs), git version
matrix (12 versions), plugins, source, git on Windows, hosting and REST on
Windows. Triggered when “master” succeeds; so, e.g., Windows errors don’t
prevent getting code into master.</p></li>
<li><p>“Feature branch” builds include “checks”, Stash distribution, unit tests,
jshint; these are all pretty fast (~4 agents, 3-4 minutes; fast feedback to
developers). Developers can trigger a second stage with all the other tests from
“master” if they think they’re required.</p></li>
</ul>
<h3 id="release-branching">Release branching</h3>
<p>Stash needs to support multiple releases:</p>
<ul>
<li><p>master</p></li>
<li><p>release branches (2.5, 2.6); lives on with bug fixes, etc.</p></li>
<li><p>bug fixes are merged back toward and into master (a fix in 2.5 is merged
into 2.6, then into master). A plugin does this sort of thing automatically,
creating pull requests for merge conflicts.</p></li>
</ul>
<p>Master and release branches all get full build plans.</p>
<h3 id="release-build">Release build</h3>
<p>A fair number of builds are made internally.</p>
<ul>
<li><p>Stash team have internal dog-fooding instance.</p></li>
<li><p>Atlassian has a corporate instance used by other teams.</p></li>
</ul>
<p>Use a release job, parameterised by a specific commit (not necessarily the head
of a branch). Pass in version and next-version for Maven. The build job runs
<code>mvn release:prepare release:perform</code>. Not much testing because it’s already
been through all the testing.</p>
<p>Create a temporary branch for the process of the release build.</p>
<h3 id="automated-deployments">Automated deployments</h3>
<p>Release artefacts for customers:</p>
<ul>
<li>www.atlassian.com, developers.atlassian.com</li>
<li>marketplace</li>
<li>Checkup (for third-parties to test their plugins)</li>
<li>Go live</li>
</ul>
<p>Internal deployments:</p>
<ul>
<li><p>Dev staging</p></li>
<li><p>Staging (same data as stash.atlassian.com) for smoke tests, etc.</p></li>
<li><p>Production</p></li>
</ul>
<h3 id="performance-testing">Performance testing</h3>
<p>There are six-thousand seat licenses with lots of CI running against them, so
performance is pretty important.</p>
<p>Daily monitoring of performance of operations. Check results in stand ups.
Notice regressions in performance.</p>
<h3 id="techniques">Techniques</h3>
<p>Try to always be ready for release - quality code.</p>
<p>Automate testing as much as possible.</p>
<p>Automate processes:</p>
<ul>
<li><p>Releases</p></li>
<li><p>Deployment</p></li>
</ul>
<h2 id="automated-environment-provisioning">Automated environment provisioning</h2>
<p>David Cheal is Chief Engineer at <a href="http://krunchtime.it">Krunchtime IT</a>. They do
AWS solution design, build and delivery; devops approach; agile; etc.</p>
<p>Life cycles are changing:</p>
<ul>
<li><p>Traditional ops environments</p></li>
<li><p>Virtualisation for cost efficiencies</p></li>
<li><p>Cloud</p></li>
</ul>
<p>None of the problems we had with traditional ops (1) are still there in the
cloud (3); we’ve just given those problems to Amazon.</p>
<h3 id="continuous-delivery-infrastructure">Continuous delivery infrastructure</h3>
<p>Deploying infrastructure often, with automation, etc., helps to detect and
prevent configuration drift between environments.</p>
<h3 id="options-for-infrastructure-agility">Options for infrastructure agility</h3>
<ul>
<li><p>CloudFormation with baked AMIs</p></li>
<li><p>Automation tools like Puppet or Chef to automate configuration and deployment.</p></li>
</ul>
<h3 id="separation-of-concerns">Separation of concerns</h3>
<p>Code deployments are not infrastructure deployments. Leave application and
dependencies to infrastructure deployment.</p>
<h3 id="continuous-challenges">Continuous challenges</h3>
<p>Technological challenges: Windows, anything from MS, learning curves</p>
<p>Cultural change: change, resistance, etc.</p>
<h3 id="example">Example</h3>
<p>Communities can be a guide; Puppet Forge is a guide but its modules <em>must</em> be reviewed.</p>
<h3 id="pragmatism">Pragmatism</h3>
<blockquote>
<p>Infrastructure is code.</p>
</blockquote>
<p>This is not true; infrastructure is infrastructure.</p>
<h3 id="opportunity">Opportunity</h3>
<p>Embrace new methods.</p>
<p>Incredible opportunity to deliver what business, operations, and developers
need more quickly, reliably, cost effectively, etc.</p>
<h3 id="demonstration">Demonstration</h3>
<p>Simple Ruby on Rails application using AWS CloudFormation to spin up EC2
instances, RDS database, ELB, etc. Use an existing Puppet master.</p>
<p>Most organisations use a naming convention which allows Puppet to determine
which classes should be applied to each server, with the right environment.</p>
<p>When using ELB, most organisations will include a specific “health check” page
in their application, which makes ELB health checks very easy.</p>
<p>Continuous infrastructure delivery is – when done right – really, really
boring.</p>
<h3 id="cloudiness">Cloudiness</h3>
<p>Elasticity lets you scale according to demand: “deal of the day” sites might
scale from 2 machines up to 15 and back down to 2, and development
infrastructure can be shut down overnight and on weekends.</p>
<p>Immutable, disposable infrastructure. Cows vs dogs.</p>]]></summary>
</entry>
<entry>
    <title>FP-Syd, October 2013</title>
    <link href="https://passingcuriosity.com/2013/fp-syd-october/" />
    <id>https://passingcuriosity.com/2013/fp-syd-october/</id>
    <published>2013-10-16T00:00:00Z</published>
    <updated>2013-10-16T00:00:00Z</updated>
<summary type="html"><![CDATA[<p>A brief mention of <a href="http://linux.conf.au/">linux.conf.au 2014</a> and how we should all take a
look at the available programme and see if we want to go.</p>
<h2 id="eriks-icfp-roundup">Erik’s ICFP roundup</h2>
<p>The <a href="http://www.icfpconference.org">International Conference on Functional Programming</a> (ICFP for short) is
a three-day core conference and is collocated with a number of related events.
<a href="http://www.icfpconference.org/icfp2013/">ICFP 2013</a> was in Boston and a number of FP-Syd regulars presented and/or
attended.</p>
<p>Erik was a long time LCA attendee but ICFP has supplanted it as his “must go”
conference. I hope to make the same switch in <a href="http://www.icfpconference.org/icfp2014/">2014</a>!</p>
<p>The <strong>Haskell Implementers Workshop</strong> covers the internals of Haskell
implementations which, these days, means GHC to a very large extent. Covers a
lot of interesting techniques, with a particular focus on compilers. Erik
mentioned work on the non-safety of generalised <code>newtype</code> deriving; using
Hermit (a dynamic/guided optimisation framework) to optimise
scrap-your-boilerplate code; and Habit (a strict Haskell dialect for OS
programming).</p>
<p>The <strong>Commercial Users of Functional Programming</strong> was, reportedly, a bit
boring, but I’ve liked the few <a href="http://www.youtube.com/channel/UCfSUv7I_aHgzcnXMcd8obsw">CUFP 2013 YouTube videos</a> I’ve watched so
far. YMMV.</p>
<p>The <strong>Haskell Symposium</strong> was a main draw (for Erik). Highlights which Erik
found worth mentioning and I found worth noting down include:</p>
<ul>
<li><p>Oleg asked difficult questions of a lot of speakers. I wonder what would
happen if he asked an easy one?</p></li>
<li><p>Effects seemed something of a hot topic.</p></li>
<li><p>Demonstrations of <a href="http://hackage.haskell.org/package/liquidhaskell">Liquid Haskell</a> (which sounds pretty great), and a
Javascript backend for GHC.</p></li>
<li><p>Intel are developing a research compiler which uses GHC’s front-end to
compile to Core and then uses their own backend. It does loop vectorisation,
but currently shows better performance on only a few benchmarks.</p></li>
<li><p>The third iteration of the I/O manager for GHC. Multithreaded, influenced by
Kazu Yamamoto’s work on Warp and mighttpd. Benchmarks against Nginx seem very
good; Warp with multiple cores sees extremely good speedups (contra Nginx).</p></li>
</ul>
<p>The main event – <strong>ICFP</strong> – is an academic conference and a lot of the
content will fly straight over the head of many a “working programmer”. Some of
the highlights included:</p>
<ul>
<li><p>A few talks on vectorisation (w/ SIMD from Intel, stream fusion, etc.) and
optimisation (for GPUs, etc.)</p></li>
<li><p>A few talks on dependent types.</p></li>
<li><p>Tactics in Coq are untyped; one talk discussed an approach to typed tactic
programming in Coq. Sounds especially interesting now that there is a “Coq
fight” in the FP-Syd calendar for next year!</p></li>
<li><p>People who didn’t attend are encouraged to watch the video of the “fun with
semi-rings” talk. I haven’t been able to find it, though.</p></li>
<li><p>One talk described a useful-sounding approach to parsing context free
grammars with a divide-and-conquer approach, allowing partial and parallel
parsing.</p></li>
<li><p>Simon Peyton-Jones discussed the new curriculum for secondary computer
science education in the United Kingdom.</p></li>
<li><p>An extension or two to System F: System Fc (explicit kind equality) and
System Fi (type indices). Anyone who can understand System F shouldn’t have
a problem reading the System Fi paper.</p></li>
<li><p>The constrained monad problem (which, apparently, Oleg said was crap?). A
paper on solving a problem which occurs when using <code>Monad</code> where they should
have used <code>Applicative</code>. It seems as though they mostly wanted the <code>do</code>
syntactic sugar; see also idiom brackets and the attempt to generalise the
<code>Monad</code> sugar.</p></li>
<li><p>“Querying ordered graphs.” Three words which sound interesting, but I’ve no
idea why I wrote them down.</p></li>
<li><p>Also: experience reports! Someone took a Scheme compiler from 4-5 to 25
passes (“nanopass”?) and, at the same time, also added a good colouring
register allocator. Apparently one of these changes made it better.</p></li>
</ul>
<p>Other events:</p>
<ul>
<li><p>A talk about a benchmark/framework to compare approaches to generic
programming at the Workshop on Generic Programming.</p></li>
<li><p>Brent Yorgey doing animations with <code>diagrams</code> at the Workshop on Functional
Art, Music, Modeling and Design.</p></li>
<li><p>Chordify is a system (written in Haskell) to analyse recordings and generate
chord transcripts. It’s not perfect but gives pretty good approximations.</p></li>
</ul>
<h2 id="ben-talking-about-data-flow-fusion">Ben talking about Data Flow Fusion</h2>
<p><a href="http://www.cse.unsw.edu.au/~benl/">Ben Lippmeier</a> – an FP-Syd regular – presented a paper at ICFP and
reprised that presentation back in Sydney for those of use who weren’t in
Boston. He described an approach using data flow to guide the compilation of
programs using stream fusion.</p>
<p>He wants to process a list of points, adding 1 to each, filtering those above
0, and also finding the maximum.</p>
<p>Doing stream fusion:</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode haskell"><code class="sourceCode haskell"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a><span class="fu">map</span> f <span class="ot">=</span> unstream <span class="op">.</span> mapS f <span class="op">.</span> stream</span>
<span id="cb1-2"><a href="#cb1-2" aria-hidden="true" tabindex="-1"></a><span class="fu">filter</span> f <span class="ot">=</span> unstream <span class="op">.</span> filterS f <span class="op">.</span> stream</span>
<span id="cb1-3"><a href="#cb1-3" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-4"><a href="#cb1-4" aria-hidden="true" tabindex="-1"></a><span class="co">-- RULE to remove (stream . unstream)</span></span></code></pre></div>
<p>The example computes <code>(vec3, n)</code>; we can’t float <code>vec3</code> into its consumer
because it’s being used in the result <em>and</em> in the computation of <code>n</code>. So we
get two loops.</p>
<pre><code>**1** -&gt; 2 -&gt; **3** -&gt; 4
	              |      |
              (    ,    )</code></pre>
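<p>In plain Haskell, the running example looks something like this list-based sketch (the names are assumed from the description). Because <code>vec3</code> is consumed both by the result and by the fold, fusion can’t reduce everything to a single loop:</p>

```haskell
-- vec3 is demanded twice: once as part of the result pair and once by
-- the fold computing n, so stream fusion is stuck with two loops.
example :: [Int] -> ([Int], Int)
example vec1 =
  let vec2 = map (+ 1) vec1
      vec3 = filter (> 0) vec2
      n    = foldr max 0 vec3
  in (vec3, n)
```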
<p><code>zipWithX</code> tends to use X+1 loop counters under stream fusion, and there are
only 8 registers to use on some platforms.</p>
<h3 id="data-flow-fusion">Data Flow Fusion</h3>
<h4 id="slight-manual-refactor">Slight manual refactor</h4>
<p>Split <code>filter</code> into two combinators <code>flag</code> – which contains <code>True</code> or <code>False</code>
for each member – and <code>pack</code> – which does the filtering.</p>
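<p>A list-based sketch of the split (the real combinators work over series; these definitions are assumed from the description):</p>

```haskell
-- flag computes a boolean for every element; pack keeps only the
-- elements whose corresponding flag is True. Composing them recovers
-- the behaviour of filter.
flag :: (a -> Bool) -> [a] -> [Bool]
flag p = map p

pack :: [Bool] -> [a] -> [a]
pack flags xs = [x | (keep, x) <- zip flags xs, keep]

filterViaFlagPack :: (a -> Bool) -> [a] -> [a]
filterViaFlagPack p xs = pack (flag p xs) xs
```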
<h4 id="extract-the-data-flow-graph">Extract the data flow graph</h4>
<p>This code generates the data flow graph.</p>
<pre><code>fun vec1 (\s1 -&gt; 
  let s2    = map (+ 1) s1
      flags = map (&gt; 0) s2
  in mkSel flags (\sel -&gt;
  let s3   = pack sel s2
      vec3 = create s3
      n    = fold max 0 s3
  in (vec3, n)))

vec1 :: Vector Int
s1 :: Series k1 Int
s2 :: Series k1 Int
flags :: Sel k1 k2
s3 :: Series k2 Int</code></pre>
<p>Series has a phantom type variable which helps keep track of the code which can
be fused into a single loop.</p>
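<p>A minimal sketch of what that phantom parameter buys (names assumed; the real <code>Series</code> type is not a list):</p>

```haskell
-- The rate parameter k never appears on the right-hand side: it only
-- tags which loop a series belongs to. Rate-preserving operations like
-- mapS keep the same k, so all series sharing a k can be fused into one
-- loop, while a series at a different rate cannot.
newtype Series k a = Series [a]

mapS :: (a -> b) -> Series k a -> Series k b
mapS f (Series xs) = Series (map f xs)

foldS :: (b -> a -> b) -> b -> Series k a -> b
foldS f z (Series xs) = foldl f z xs
```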
<p>We learn that <code>k1 &gt;= k2</code></p>
<p>With the flow graph (annotated with operations, etc.), throw away the source.</p>
<h4 id="schedule-the-grapch-into-an-abstract-loop-nest">Schedule the graph into an abstract loop nest</h4>
<p>Abstract loop nest:</p>
<pre><code>loop k1 {

  start: ....
  
  body: ....
  
  inner: ...
  
  end: ...

} yields ...</code></pre>
<p>Start at the front of the data flow graph and add elements of the graph to the
nested abstract loop.</p>
<p>Operations go into different places in the nested abstract loop. A <code>fold</code>, for
example, allocates an accumulator in <code>start</code>, increments it somewhere within
<code>body</code>, and reads it in <code>end</code>.</p>
<h4 id="extract-implementation-from-abstract-loop-nest.">Extract implementation from abstract loop nest.</h4>
<p>Translate the various bits and pieces of the abstract loop nest data structure
into different Haskell combinators.</p>
<h3 id="implementation">Implementation</h3>
<p>GHC plugin which grabs Core, does data flow compilation and generates Core to
give back to GHC.</p>
<p>There are some issues in the current implementation where LLVM doesn’t realise
that writing to the output doesn’t <em>need</em> to reload the start and length
values.</p>
<blockquote>
<p><strong>If</strong> your program is first order (argument functions take scalars,
not series), non-recursive, synchronous, finite data flow program
using our combinators.</p>
<p><strong>Then</strong> by construction your program will be compiled correctly by
this system.</p>
</blockquote>
<h2 id="liam-on-cdsl">Liam on CDSL</h2>
<p>Liam O’Connor works for NICTA. Instead of talking about something he recently
learned, he’s talking about work: CDSL - a restricted functional language for
file system verification.</p>
<p>Trying to establish a formal proof of the correctness of a file system driver
in an operating system.</p>
<p>Already have an architecture for this sort of problem (from seL4):</p>
<ol type="1">
<li><p>Abstract spec - high-level, nondeterministic (followed by an “interesting”
proof relating it to the next level; ~ 15% of the effort)</p></li>
<li><p>Low level spec - purely functional (followed by a “largely boring” proof
relating it to the implementation; ~ 30% of the effort)</p></li>
<li><p>C implementation - efficient.</p></li>
</ol>
<p>The remaining ~ 55% is showing that the other proofs don’t do something stupid:
proving that the invariants all hold.</p>
<p>Ignoring the kernel proper, architecture support, and drivers (another NICTA
project), the largest part of the Linux kernel is the <code>fs/</code> directory; 31
different file systems were supported by the kernel running on some random
NICTA server.</p>
<p>There are lots of file systems with, one assumes, quite a lot of common
functionality and infrastructure. The goal of the project is not to make a
cathedral of a single verified file system, more a factory for churning out
numerous file systems. The approach is to use a DSL to generate the low-level
spec, proof and implementation. High-level spec and proof are done by hand, so
generated outputs need to be readable.</p>
<p>The language should:</p>
<ul>
<li><p>establish key verification properties</p></li>
<li><p>compete with efficient C code (imperative, destructive updates, etc.)</p></li>
<li><p>be expressive enough to write a file system</p></li>
</ul>
<p>But:</p>
<ul>
<li>doesn’t need to express <em>everything</em> in a file system. Hand-written components
could be plugged in to the DSL (and, hopefully, re-used).</li>
</ul>
<h3 id="simply-typed-lambda-calculus">Simply-typed lambda calculus</h3>
<p>Simply-typed lambda calculus is strongly normalising (you can’t write general
recursion, e.g. the Y combinator).</p>
<p>First-order language: lambdas go away, use <code>let</code> binding and restrict to
defining top-level functions. Added structural rules for mixing, weakening, ?</p>
<p>Memory management needs to be safe and expressive (not pure pass-by-value; we
need the heap), with no GC (you’d have to verify it, it introduces latency,
etc.).</p>
<p>Automatic memory management (GC) is too big a burden, and many static automatic
memory management schemes are inefficient or unsafe.</p>
<p>What about manual memory management?</p>
<div class="sourceCode" id="cb5"><pre class="sourceCode haskell"><code class="sourceCode haskell"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a><span class="kw">let</span> x <span class="ot">=</span> allocateData ()</span>
<span id="cb5-2"><a href="#cb5-2" aria-hidden="true" tabindex="-1"></a>    x' <span class="ot">=</span> updateData x</span>
<span id="cb5-3"><a href="#cb5-3" aria-hidden="true" tabindex="-1"></a>    _ <span class="ot">=</span> free x</span>
<span id="cb5-4"><a href="#cb5-4" aria-hidden="true" tabindex="-1"></a><span class="kw">in</span> x'</span></code></pre></div>
<p>But this is terrible! Unsafe, inefficient, etc.</p>
<p>So have a linear type system, throwing away weakening, etc. This forces every
value to be used exactly once (you can’t allocate and then never use the
result). The typing rules require that introduction and elimination be paired.</p>
<p>Linear types mean that the elimination operations (e.g. <code>updateData</code>) are the
<em>last</em> to access terms, so they can do destructive updates.</p>
<p>Two interpretations of these semantics:</p>
<ul>
<li><p>value semantics: pass by value, no heap, immutability, reasoning.</p></li>
<li><p>update semantics: heap, updates, deallocates, implementation.</p></li>
</ul>
<p>Linear types allow for both.</p>
<p>But sometimes you want non-linear, pass-by-value (arithmetic operations, etc.):</p>
<ul>
<li>Unboxed types, ints, small structs</li>
<li>Functions themselves</li>
</ul>
<p>Allow structural rules (dereliction and contraction) for certain types only. So
now we have <code>T_{.}</code> and <code>T_{#}</code> (unboxed and value types).</p>
<h3 id="buffer-interface">Buffer interface</h3>
<pre><code>make : () -&gt; .Buf
free : .Buf -&gt; ()
length : .Buf -&gt; (#U32, .Buf)

serialise : (.Obj, .Buf) -&gt; (.Obj, .Buf)
deserialise : .Buf -&gt; (.Obj, .Buf)</code></pre>
<p>Non-linear “look but don’t touch” references with <code>*</code>:</p>
<pre><code>make : () -&gt; .Buf
free : .Buf -&gt; ()

length : *Buf -&gt; #U32
serialise : (*Obj, .Buf) -&gt; .Buf
deserialise : *Buf -&gt; .Obj</code></pre>
<p>Use <code>let!</code> construct which is like <code>let</code> but we mark specific variables as
read-only within the <code>let</code> clauses and back to linear in the <code>in</code>.</p>
<p>But this alone is unsafe (a read-only reference can escape the <code>let</code>). Regions
could fix this, but they choose not to use them unless required.</p>
<p>Linear typing breaks some control flow:</p>
<div class="sourceCode" id="cb8"><pre class="sourceCode haskell"><code class="sourceCode haskell"><span id="cb8-1"><a href="#cb8-1" aria-hidden="true" tabindex="-1"></a><span class="kw">let</span> x <span class="ot">=</span> alloc ()</span>
<span id="cb8-2"><a href="#cb8-2" aria-hidden="true" tabindex="-1"></a><span class="kw">in</span> <span class="kw">if</span> cond</span>
<span id="cb8-3"><a href="#cb8-3" aria-hidden="true" tabindex="-1"></a>   <span class="kw">then</span> update(x)</span>
<span id="cb8-4"><a href="#cb8-4" aria-hidden="true" tabindex="-1"></a>   <span class="kw">else</span> x</span></code></pre></div>
<h3 id="loops">Loops</h3>
<p>Loops were the hardest, most annoying part of the formalisation of the language.</p>
<p>There are built-in loop combinators: <code>map</code>, <code>fold</code>, <code>with</code>, <code>for</code>.</p>
<pre><code>let sum = for (x,y) in fold(arr) with 0
              do (x + y)

let arr', sum = for (x,y) in map(arr) with 0
                    do (x * 2, x + y)</code></pre>
<p>Alas, this is unsafe (double frees, etc.), but you can restrict linear types in
the loop expression. Any linear values the loop needs then have to become
accumulator parameters.</p>
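<p>In ordinary Haskell terms, the two loops above correspond to something like this (semantics assumed from the slide):</p>

```haskell
-- The fold loop: accumulate x + y over arr, starting from 0.
sumLoop :: [Int] -> Int
sumLoop = foldl (\y x -> x + y) 0

-- The map loop: produce a transformed array and an accumulated sum in
-- one pass over the input.
mapLoop :: [Int] -> ([Int], Int)
mapLoop = foldr step ([], 0)
  where step x (out, y) = (x * 2 : out, x + y)
```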
<h3 id="error-handling">Error handling</h3>
<p>The return-code convention used in languages like C is pretty bad. Instead,
separate statements and expressions.</p>
<p>Statements have three types:</p>
<ul>
<li>s : <span class="math inline">\(\bar{T_s}\)</span></li>
<li>s : <span class="math inline">\(\text{fails}\ \bar{T_f}\)</span></li>
<li>s : <span class="math inline">\(\bar{T_s}\ \text{fails}\ \bar{T_f}\)</span></li>
</ul>
<p>The type of <code>if then else</code> is the least upper bound
<span class="math inline">\(T_t \sqcup T_e\)</span>: lattice join, subtyping, etc.</p>
<p>Make <code>let</code> and <code>let!</code> only handle success cases. Force sub-expressions to
handle potential errors. Type system <em>forces</em> you to handle your errors and the
<em>linear</em> type system forces you to free your resources.</p>
<h3 id="types">Types</h3>
<p>Product and sum types (implemented as structs and tagged unions).</p>
<p>Accessing members of linear records is problematic as you use the record
multiple times:</p>
<pre><code>let sum = operation(x.field1, x.field2)</code></pre>
<p>Instead use an open/close structure.</p>]]></summary>
</entry>
<entry>
    <title>FP-Syd, August 2013</title>
    <link href="https://passingcuriosity.com/2013/fp-syd-august/" />
    <id>https://passingcuriosity.com/2013/fp-syd-august/</id>
    <published>2013-08-28T00:00:00Z</published>
    <updated>2013-08-28T00:00:00Z</updated>
<summary type="html"><![CDATA[<p>Here are some notes from the August 2013 meeting of the <a href="http://fp-syd.ouroborus.net/">FP-Syd</a>
functional programming group.</p>
<h2 id="julian-gamble-on-simulation-testing-in-datomic">Julian Gamble on Simulation Testing in Datomic</h2>
<p><a href="http://juliangamble.com">Julian Gamble</a> (<a href="http://twitter.com/juliansgamble">@juliansgamble</a> on Twitter) gave
his first FP-Syd talk with an introduction to simulation testing using
<a href="http://www.datomic.com">Datomic</a>.</p>
<blockquote>
<p>Plug: He’s writing a book called <a href="http://clojurerecipes.net/">Clojure Recipes</a> which is due out in January 2014.</p>
</blockquote>
<p><a href="https://github.com/Datomic/simulant">Simulant</a> – the subject of the talk – is a framework for the Datomic
database, built for <em>simulation testing</em>.</p>
<p>There are many types of testing (in something resembling order of popularity):</p>
<ul>
<li>Unit testing</li>
<li>User acceptance testing</li>
<li>Performance testing</li>
<li>Simulation testing</li>
</ul>
<p>Simulation testing uses modeling and simulation to “test” systems which are too
complex for linear models like unit testing. Generations of simulations:</p>
<ul>
<li><p>High school students solving maths problems</p></li>
<li><p>Stock analysts modelling and analysing companies</p></li>
<li><p>Analytics driven audits simulating systems for comparison.</p></li>
<li><p>Business scenarios predicting responses to, e.g., market crashes.</p></li>
</ul>
<p>Most of these can be done on a piece of paper or on a single machine, but
systems which aren’t amenable to such approaches are becoming more common.</p>
<p><a href="http://www.amazon.com/Purely-Functional-Structures-Chris-Okasaki/dp/0521663504">Chris Okasaki’s book Purely Functional Data Structures</a> popularised the
use of purely functional approaches to data structures through sharing.</p>
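<p>The sharing in question is easy to sketch: “updating” a persistent list builds a new spine that reuses the untouched tail, so both versions stay valid.</p>

```haskell
-- Consing onto a list yields a new list whose tail is the old list
-- itself (shared, not copied); the old version remains fully usable.
push :: a -> [a] -> [a]
push = (:)

old :: [Int]
old = [2, 3]

new :: [Int]
new = push 1 old  -- the [2,3] tail is shared with old
```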
<p>Datomic is “a database as a value”. Or, put another way, a database as a
persistent data structure. This makes state management easier for, e.g.,
reproducing problems for bug fixing.</p>
<p>Datomic is built on a pluggable storage system: it uses a Java-native store
locally and can use Amazon DynamoDB. Writing is done through a single
transactor process, with querying done directly from the data store.</p>
<p>Simulant is a framework which uses Datomic to help distribute and scale
simulation testing. It assumes that you’ll be modelling <em>agents</em> and
<em>actions</em> – which are stored in the Simulant schema – with additional model
details stored in your own schema. It uses git, too, to keep track of versions
of the simulation as it changes over time.</p>
<h3 id="process">Process</h3>
<ol type="1">
<li><p>Develop a Datomic schema for your model. This will be used to record the
generic details of the simulation – the actions performed by the agents – and
the domain specific details.</p></li>
<li><p>Set the model parameters (stocks/prices, etc. or ants/food/world size)</p></li>
<li><p>Make statistical assertions about the system. These will be verified against
the data recorded during the simulation.</p></li>
</ol>
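<p>As a toy illustration of step 3 (this is not Simulant’s API, which is Clojure; the types here are invented): record actions during a run, then evaluate a statistical assertion over the recorded data afterwards.</p>

```haskell
-- An action recorded during a simulation run, and a statistical
-- assertion evaluated over the whole run after the fact.
data Action = Action
  { agentId   :: Int
  , latencyMs :: Double
  }

meanLatency :: [Action] -> Double
meanLatency as = sum (map latencyMs as) / fromIntegral (length as)

-- Assert that the mean observed latency stays under a bound.
latencyAssertion :: [Action] -> Bool
latencyAssertion run = meanLatency run < 200
```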
<p>There are more details to this, but they flew past and I couldn’t get them down.</p>
<h3 id="why-datomic">Why Datomic?</h3>
<p>Being persistent (in the “persistent data structures” sense), Datomic makes it
far easier to review old data from older simulations, add additional
statistical assertions, etc. without having to jump through the many and varied
hoops you’d need for, e.g., a relational database.</p>
<p>I’m not sure how true a comparison this is, given that Datomic forces all
writes to the database through the single transactor. A similar architecture
with a relational database could quite easily use a single transactor to
enforce timestamp consistency on data being recorded. I must be missing
something.</p>
<h3 id="applicability">Applicability</h3>
<ul>
<li><p>Non-trivial system with multiple agents.</p></li>
<li><p>Datomic’s database-as-a-value thing.</p></li>
<li><p>Where you have statistical assertions to be evaluated.</p></li>
</ul>
<h2 id="shane-stephens-on-web-animations">Shane Stephens on Web Animations</h2>
<p>Shane works on the <a href="http://w3.org/TR/web-animations/">web animations specification</a> for the W3C, which unifies
SVG and CSS animations on the web.</p>
<p>The web animations specification defines a Javascript API which looks something
like this:</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode javascript"><code class="sourceCode javascript"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a><span class="kw">new</span> <span class="fu">Animation</span>(</span>
<span id="cb1-2"><a href="#cb1-2" aria-hidden="true" tabindex="-1"></a>	<span class="bu">document</span><span class="op">.</span><span class="fu">getElementById</span>(<span class="st">'hello'</span>)<span class="op">,</span></span>
<span id="cb1-3"><a href="#cb1-3" aria-hidden="true" tabindex="-1"></a>	[ {<span class="st">&quot;left&quot;</span> <span class="op">:</span> <span class="st">&quot;200px&quot;</span>}<span class="op">,</span></span>
<span id="cb1-4"><a href="#cb1-4" aria-hidden="true" tabindex="-1"></a>	  {<span class="st">&quot;left&quot;</span> <span class="op">:</span> <span class="st">&quot;400px&quot;</span><span class="op">,</span> <span class="st">&quot;height&quot;</span> <span class="op">:</span> <span class="st">&quot;100px&quot;</span>}]<span class="op">,</span></span>
<span id="cb1-5"><a href="#cb1-5" aria-hidden="true" tabindex="-1"></a>	<span class="dv">1</span></span>
<span id="cb1-6"><a href="#cb1-6" aria-hidden="true" tabindex="-1"></a>)<span class="op">;</span></span></code></pre></div>
<p>This talk isn’t about “generating a functional API for web animations”, though
that’s what he thought it would be two weeks ago. He tried to generate bindings, but failed.
Instead, it’s a discussion about the attempt and the result.</p>
<p>I think there might be animations of yak shaving involved.</p>
<h3 id="haskell-to-js-compilers">Haskell to JS compilers</h3>
<p>There are quite a few functional languages which target Javascript and they
all, in Shane’s opinion, hate the web.</p>
<h4 id="utrecht-haskell-compiler-javascript-backend">Utrecht Haskell Compiler JavaScript backend</h4>
<p>The <a href="https://github.com/UU-ComputerScience/uhc">UHC</a> Javascript backend has little documentation, claims to “compile
most of Hackage” and provides an FFI to interact with “native” Javascript code.</p>
<p>The barrier between Haskell and Javascript is the problem. Everything on the
web “platform” is exposed through Javascript APIs. Having UHC-JS generate a
blob of HTML, CSS, and Javascript is pretty hard to compose with other
web-ish things.</p>
<p>There’s a big impedance mismatch between Haskell and Javascript.</p>
<h4 id="elm">Elm</h4>
<p><a href="http://elm-lang.org/">Elm</a> is a functional reactive programming language which compiles to
Javascript. Lots of documentation, an online editor, and it already has
animations.</p>
<p>But Elm is another “replace the world” abstraction.</p>
<h4 id="roy">Roy</h4>
<p><a href="http://roy.brianmckenna.org/">Roy</a> has a much saner approach, being largely just syntactic sugar around
Javascript:</p>
<ul>
<li>Javascript functions are available</li>
<li>Roy types are almost Javascript “types”</li>
</ul>
<p>But no ADTs, etc., because JS is pretty shitty: no proper tail calls for recursion, etc.</p>
<h4 id="krazy">krazy</h4>
<p>So, with no “good” existing language available, he started his own language called krazy.</p>
<ul>
<li><p>The current implementation is a PEG parser and interpreter in Javascript.</p></li>
<li><p>Functional types are Javascript types (lists, for example, really are Javascript arrays).</p></li>
<li><p>Supports ADTs, HOFs, pattern matching, etc.</p></li>
<li><p>JS interop “constrained” by type assertions.</p></li>
<li><p>Will probably add records with optional, structural typing.</p></li>
</ul>
<h3 id="animations">Animations</h3>
<p>Back to the web animations API.</p>
<p>The web animations specification has side-effect free constructors for
animations, effects, timing groups, etc.</p>
<p>This could be exposed to library authors and used as an interface or to
generate an interface automatically? I’m not sure.</p>
<h2 id="thomas-sewellts-on-learnings-about-sat">Thomas Sewell on learnings about SAT</h2>
<p><a href="http://ssrg.nicta.com.au/people/?cn=Thomas+Sewell">Thomas Sewell</a></p>
<blockquote>
<p>Survey: who can name an NP-complete problem?</p>
</blockquote>
<p>NP-complete problems can be solved in polynomial time by a non-deterministic
machine, and a candidate solution can be checked in polynomial time by a
deterministic machine. In essence, they are very hard to solve but easy to check.</p>
<p>Circuit satisfiability can be encoded in SAT.</p>
<p>The SAT problem attempts to assign values to logical variables in a formula in
conjunctive normal form and produces either a set of assignments (if the
formula is satisfiable) or “no” (if there is no assignment).</p>
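<p>The definition above is easy to demonstrate in code. A minimal sketch, assuming a DIMACS-style encoding where a clause is a list of signed integers and an assignment is a set of true literals:</p>

```python
def check_assignment(clauses, assignment):
    """Deterministic polynomial-time check of a SAT assignment.

    Clauses are lists of signed integers (3 means x3, -3 means not x3);
    the assignment is a set of true literals.  The formula is satisfied
    when every clause contains at least one true literal.
    """
    return all(any(lit in assignment for lit in clause) for clause in clauses)


# (x1 or not x2) and (x2 or x3) under x1=True, x2=False, x3=True.
ok = check_assignment([[1, -2], [2, 3]], {1, -2, 3})
```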
<p>The DPLL algorithm is pretty naive and does lots of backtracking.</p>
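<p>A rough sketch of that naive backtracking search (an illustration of the idea, not any particular solver’s implementation), assuming clauses as lists of signed DIMACS-style integers:</p>

```python
def dpll(clauses, assignment=None):
    """Naive DPLL: pick an unassigned variable, try both values, backtrack.

    Returns a satisfying set of literals, or None if unsatisfiable.
    """
    if assignment is None:
        assignment = set()
    # Simplify: drop satisfied clauses, strip falsified literals.
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue  # clause already satisfied
        remaining = [lit for lit in clause if -lit not in assignment]
        if not remaining:
            return None  # empty clause: contradiction, backtrack
        simplified.append(remaining)
    if not simplified:
        return assignment  # every clause satisfied
    var = abs(simplified[0][0])  # naive choice: first unassigned variable
    for lit in (var, -var):
        result = dpll(simplified, assignment | {lit})
        if result is not None:
            return result
    return None


# (x1 or x2) and (not x1 or x2) forces x2 to be true.
model = dpll([[1, 2], [-1, 2]])
```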
<p>The CDCL algorithm – discovered in the 90s – increased the size of viable
problems to millions of variables. Instead of having to “re-learn” the same
pieces of information repeatedly when backtracking, the Conflict Driven Clause
Learning algorithm tracks the “cause” of each clause it learns and, when a
contradiction is derived, it learns the negation of the contradiction’s parent assignments.</p>
<p>E.g.</p>
<blockquote>
<p>If we reach a contradiction, and the parents are <span class="math inline">\(x_{1}\)</span>, <span class="math inline">\(\neg x_{2}\)</span>, <span class="math inline">\(x_{12}\)</span>,
then we need to learn <span class="math inline">\(\neg x_{1} \vee x_{2} \vee \neg x_{12}\)</span>: at least one
of the assumptions is false, so the negation of their conjunction must hold.</p>
</blockquote>
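<p>The learning step in the quote is mechanical: the learned clause is the literal-wise negation of the conflicting parent assignments. In Python, with literals as signed integers:</p>

```python
def learned_clause(parent_literals):
    """CDCL-style clause learning.

    If the assignments x1, not-x2, x12 led to a conflict, at least one of
    them must be false, so learn (not x1) or x2 or (not x12): the
    negation of their conjunction, expressed as a clause.
    """
    return [-lit for lit in parent_literals]


# Parents x1, not-x2, x12 (as 1, -2, 12) yield the clause -1 or 2 or -12.
clause = learned_clause([1, -2, 12])
```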
<h3 id="learnings">Learnings</h3>
<ul>
<li><p>Competitions - regular solver competitions have driven progress.</p></li>
<li><p>Fast propagation - a modern SAT solver needs a very efficient implementation
of the propagation algorithm.</p></li>
<li><p>Locality - solvers make decisions “near” previous decisions. Need a heuristic
to find “nearby” variables for choice.</p></li>
<li><p>Phases - alternate between phases focussed on finding a satisfying assignment and phases focussed on proving unsatisfiability.</p></li>
<li><p>Pruning - prune the database of clauses periodically to speed propagation.</p></li>
<li><p>Glue - Not sure what this means?</p></li>
<li><p>Rewriting - preprocessing the problem into an equisatisfiable problem. Makes
the problem “better” and works well as a first step. Useful on problems like
CPU verification.</p></li>
</ul>
<p>Lots of problems have nice and/or useful SAT encodings.</p>
<p>NP-complete problems were, in the not too distant past, primarily useful as a
polite “no” for managers. (You can’t have your cake and eat it too.)</p>
<h3 id="sat-with-proofs">SAT with Proofs</h3>
<p>Some solvers produce a resolution proof.</p>
<p>A Reverse Unit Propagation (RUP) proof is a series of clauses each of which can be
learned by unit propagation alone. The conflict clauses of a CDCL solver, in the
order they are learned, form a RUP proof.</p>
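<p>The RUP property is itself cheap to check: assert the negation of each literal in the candidate clause as a unit clause, and see whether unit propagation alone derives a conflict. A hedged sketch, assuming clauses as lists of signed integers:</p>

```python
def unit_propagate(clauses):
    """Run unit propagation to a fixed point.

    Clauses are lists of signed integers (3 means x3, -3 means not x3).
    Returns False if a conflict is derived, True otherwise.
    """
    assigned = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if len(clause) == 1:
                lit = clause[0]
                if -lit in assigned:
                    return False  # unit clause contradicts an assignment
                if lit not in assigned:
                    assigned.add(lit)
                    changed = True
        remaining_clauses = []
        for clause in clauses:
            if any(lit in assigned for lit in clause):
                continue  # clause satisfied, drop it
            reduced = [lit for lit in clause if -lit not in assigned]
            if not reduced:
                return False  # empty clause: conflict
            if len(reduced) != len(clause):
                changed = True
            remaining_clauses.append(reduced)
        clauses = remaining_clauses
    return True


def is_rup(clauses, candidate):
    """Check the RUP property: asserting the negation of each literal of
    the candidate clause must let unit propagation derive a conflict."""
    assumptions = [[-lit] for lit in candidate]
    return not unit_propagate(clauses + assumptions)
```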
<p>DRUP adds clause deletion, to speed up unit propagation.</p>
<p>Having useful proofs with rewriting is complex. Checking a SAT proof for a
rewritten problem is tricky: incorporating the rewriting into the proof and
validating the rewriting is often as complex as the SAT problem itself.</p>
<h3 id="motivations">Motivations</h3>
<p>Have some SMT proofs and would love to check them in HOL4 or Isabelle/HOL.
Satisfiability Modulo Theories (SMT) incorporates SAT as part of it. HOL4 and
Isabelle/HOL are highly trusted but very slow. Using SMT/SAT to solve a problem
quickly and Isabelle/HOL to replay and verify the result should result in a
fast, trusted proof.</p>
<p>There are SAT replay tools that do this sort of thing, but they were all pretty
slow or extremely slow. It turns out millions of variables are hard in more traditional
tools.</p>]]></summary>
</entry>
<entry>
    <title>Sydney Devops Meetup, August 2013</title>
    <link href="https://passingcuriosity.com/2013/sydney-devops-meetup-august/" />
    <id>https://passingcuriosity.com/2013/sydney-devops-meetup-august/</id>
    <published>2013-08-15T00:00:00Z</published>
    <updated>2013-08-15T00:00:00Z</updated>
<summary type="html"><![CDATA[<p>Here are some notes from the <a href="http://www.meetup.com/devops-sydney/events/117291642/">August 2013 Sydney Devops Meetup</a>.</p>
<h2 id="artur-ejsmont-on-release-management-at-yahoo7">Artur Ejsmont on release management at Yahoo!7</h2>
<p>Artur is a Senior Software Engineer at Yahoo!7. I think he said he’s on the
platforms team? The environment within the team is rather different from many
others – it has much more in common with release engineering and system
administration than with other roles.</p>
<p>Everything is released and deployed as packages using a suite of tools and
formats developed within the Yahoo! empire. Packages include (almost) everything:
PHP source code, crontabs, configurations, etc.</p>
<p>Release descriptions (CMR) include:</p>
<ul>
<li>package versions and clusters</li>
<li>conf and cron changes</li>
<li>database and process management</li>
</ul>
<h3 id="joined-team">Joined Team</h3>
<p>When he joined the team, the 5 members were responsible for 180 packages
(committing to 1-2 dozen packages in an average sprint).</p>
<p>There was a lack of visibility into not only the state of various packages
(deployed versions, build and test status, etc.) but even which packages there
<em>are</em> (some were committed to SVN but never made it into the package repository).</p>
<p>Problem with packages lingering without stable releases. Wanted to be able to
recreate environments, etc. but dependencies not being promoted to stable can
make it a pain in the arse to track down specific versions.</p>
<ul>
<li>Uncertainty about what has to be released</li>
</ul>
<p>A great deal of manual work to assemble change management requests for
releases. Two days of work at the end of each sprint, trawling through
documentation, trackers, SVN, etc.</p>
<p>Ten different application clusters with different versions of different
packages on each.</p>
<ul>
<li>Manual testing of int, stage, and prod environments</li>
</ul>
<p>Perception was that the team was doing way too much manual work.</p>
<p>Constantly searching for information in disparate sources; repos, code,
trackers, wikis, etc.</p>
<p>Ecosystem is too complex.</p>
<p>Too many moving parts &amp; chances to screw things up.</p>
<h3 id="vision">Vision</h3>
<p>Provide visibility</p>
<blockquote>
<p>I don’t want to guess, nor search.</p>
</blockquote>
<p>Automate</p>
<blockquote>
<p>Do it for me or tell me what to do next.</p>
</blockquote>
<p>Data aggregation</p>
<p>Single point of entry for Bugzilla, svn, ci, dist, CMR tool, etc.</p>
<p>Provide metrics</p>
<h3 id="development">Development</h3>
<p>Built it over Christmas period.</p>
<ol type="1">
<li><p>Automated job to process the entire SVN repo, discover packages, and generate 190
static HTML reports.</p></li>
<li><p>Second release using MySQL.</p></li>
</ol>
<h3 id="package-list">Package List</h3>
<p>List of 190 packages. Sort by: CI state (broken at top), release state (commits
but no version released), package created (but not deployed everywhere yet), up
to date.</p>
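<p>The report’s ordering can be sketched as a simple sort over a priority list. The state names here are hypothetical, paraphrasing the categories above:</p>

```python
# Order packages the way the report does: broken CI first, then
# unreleased commits, then partially deployed, then up to date.
# (State names are made up for illustration.)
SORT_ORDER = ["ci-broken", "unreleased-commits", "partially-deployed", "up-to-date"]


def report_order(packages):
    """Sort package records (with a hypothetical 'state' field) by urgency."""
    return sorted(packages, key=lambda pkg: SORT_ORDER.index(pkg["state"]))


packages = [
    {"name": "feeds", "state": "up-to-date"},
    {"name": "auth", "state": "ci-broken"},
    {"name": "search", "state": "unreleased-commits"},
]
```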
<p>Provides information including:</p>
<ul>
<li><p>Version numbers (svn trunk, newest in package repo, oldest in production)</p></li>
<li><p>A “score” (higher is worse) so packages can be ranked by priority.</p></li>
<li><p>Links to various sources of information (related CMRs, SVN, CI, repo)</p></li>
</ul>
<p>Rollup</p>
<ul>
<li>Healthy</li>
<li>Pending</li>
<li>Unhealthy</li>
</ul>
<h3 id="cmr-builder">CMR Builder</h3>
<p>Interrogates various data sources:</p>
<ul>
<li>SVN</li>
<li>Igor (server role manager)</li>
<li>Repository (dependencies)</li>
<li>Deployments</li>
</ul>
<p>Assemble changelogs, etc.</p>
<p>Some packages are based on old CVS repositories, need crazy date-based logic to
build a diff.</p>
<h3 id="dependencies">Dependencies</h3>
<p>Dependencies between packages are really annoying; lots of dependencies between
packages. 10 major applications, 190 packages. Only a few packages are
relatively independent.</p>
<p>Provides overview of dependencies:</p>
<ul>
<li>List of packages required by this package</li>
<li>List of packages which require this package</li>
</ul>
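<p>Both dependency views can be derived from a single forward map by inverting it. A small sketch with hypothetical package names:</p>

```python
def reverse_dependencies(depends_on):
    """Invert a forward dependency map (package -> packages it requires)
    into a reverse map (package -> packages that require it)."""
    required_by = {pkg: set() for pkg in depends_on}
    for pkg, deps in depends_on.items():
        for dep in deps:
            required_by.setdefault(dep, set()).add(pkg)
    return required_by


# Hypothetical packages: the frontend requires the API client, which in
# turn requires a shared config package.
deps = {
    "frontend": {"api-client"},
    "api-client": {"shared-config"},
    "shared-config": set(),
}
rev = reverse_dependencies(deps)
```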
<h3 id="metrics">Metrics</h3>
<p>Metrics to tell:</p>
<ul>
<li><p>How are things? Good or bad?</p></li>
<li><p>How are things changing? Getting better?</p></li>
</ul>
<p>The lag-score tries to combine a range of factors (tests failing, lagging
production versions, etc.) into a single number. Plotted over time, it shows
very little progress over 6 months.</p>
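<p>A lag-score of this sort might be a weighted sum over per-package factors. The factor names and weights below are pure assumptions for illustration; the talk only said the score combines several factors, with higher being worse:</p>

```python
# Hypothetical weights: the factor names and values are assumptions,
# not Yahoo!7's actual formula.
WEIGHTS = {
    "failing_tests": 5.0,
    "versions_behind_production": 2.0,
    "open_cmrs": 1.0,
}


def lag_score(factors):
    """Combine per-package factors into one number (higher is worse)."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())


score = lag_score({"failing_tests": 1, "versions_behind_production": 3, "open_cmrs": 2})
```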
<h3 id="questions">Questions</h3>
<p>Why a custom packaging tool?</p>
<blockquote>
<p>It was invented at Yahoo! before there were existing tools like dpkg,
rpm, etc. Lots of tools to manage, e.g., 40,000 servers involved in
Yahoo! Mail.</p>
<p>Given the tools and scale, it probably won’t be going away.</p>
</blockquote>
<p>Release notes: if it’s bullshit, why not kill it completely?</p>
<blockquote>
<p>It’s an embedded part of the environment and culture of this team and
other teams. Also: comes from global.</p>
<p>CMRs provide communication channel between teams and sysadmins. It’s a
heavy process, and are trying to make it more lightweight, but safety
is important.</p>
</blockquote>
<p>How fast do you go?</p>
<blockquote>
<p>About two release windows a week.</p>
<p>Sprints are about 3 weeks, but not religious about it.</p>
<p>SCRUM-ish, but no product owner, etc. so only ish.</p>
</blockquote>
<p>Have you got your tool into other teams?</p>
<blockquote>
<p>New version is in use by three or four more teams.</p>
<p>Internal presentation, now crawling all the things. Using maintainer
information to group stuff into teams.</p>
</blockquote>
<p>Are all environments managed in the same way?</p>
<blockquote>
<p>Yeah, it’s all controlled using the same tools.</p>
</blockquote>
<p>Reproducing production in staging for incident response?</p>
<blockquote>
<p>Easy using the role-based server management system.</p>
</blockquote>
<p>Configuration management in packages?</p>
<blockquote>
<p>Packages declare the configuration options they have.</p>
</blockquote>
<p>More</p>
<blockquote>
<p>Command to override value for a configuration parameter declared by a
package.</p>
<p>Changes to databases aren’t managed by the tooling; they’re handled manually.
Sometimes they have to make schema changes backward compatible and run them beforehand, etc.</p>
</blockquote>
<h2 id="james-gorman-on-plain-old-services">James Gorman on Plain Old Services</h2>
<p>A lot of this is about James having the shits with the way they do things at
Yahoo!7 and on the web in general.</p>
<p>Working in Java, there’s a metric shit ton of frameworks. JBoss got deprecated.</p>
<blockquote>
<p>Everything you can do with Tomcat is an awful hack.</p>
</blockquote>
<p>Found a data-intensive server container based on Jersey but simpler, and
focussed on the web. It has a three-tier architecture.</p>
<p>Want more asynchronous: message queues, etc. Decoupling. Wrote a thing that
does this. Similar architecture but more ways of asking for things to be done
(cron, message queues, etc.)</p>
<blockquote>
<p>I don’t recommend anyone ever write server middleware.</p>
</blockquote>
<h2 id="peter-ericson-on-erlang-and-elixr">Peter Ericson on Erlang and Elixir</h2>
<p><a href="http://www.erlang.org">Erlang</a> is Erlang; <a href="http://elixir-lang.org">Elixir</a> is a Ruby-ish language which compiles directly
to Erlang bytecode.</p>
<p>Elixir Dynamo is a web framework for Elixir. Scaffolding, etc.</p>
<p>See <a href="https://bitbucket.org/pdericson/erlang_future">example code</a>.</p>
<h2 id="sergey-guzenkov-on-the-red-hat-summit">Sergey Guzenkov on the Red Hat Summit</h2>
<p>Sergey was in the US for the <a href="http://www.redhat.com/summit/">Red Hat Summit</a> last
month.</p>
<p>They’ll be releasing a major new version of <a href="http://www.redhat.com/products/enterprise-linux/rhn-satellite/">Red Hat Satellite</a> (their
management thing) building on Puppet, Foreman, Katello, Pulp, Candlepin.</p>
<p>The RHEL7 release is delayed. It’ll be based on Fedora 19; the beta is due in
December 2013 and the 7.0 release is expected early next year. It replaces MySQL
with MariaDB; adds MongoDB, nodejs, and systemd; and upgrades a bunch of programming
languages. It will include client and server support for pNFS, a parallel
extension of NFS.</p>
<h2 id="shaun-domingo-on-making-knife-and-support-play-nice">Shaun Domingo on making knife and support play nice</h2>
<p>Support gets queries about Rails apps, etc. They ask engineers, but the engineers
are busy, etc. Support staff should be able to interrogate things themselves.</p>
<p>Building on top of <a href="http://docs.opscode.com/chef/knife.html">knife</a> and knifeblock (manage knife configurations).
Plugin allowing support staff to download application keys (to interact with
APIs on their behalf), talk to APIs, generate knifeblock configuration and then
help resolve issues.</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode bash"><code class="sourceCode bash"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a>	<span class="co"># List apps.</span></span>
<span id="cb1-2"><a href="#cb1-2" aria-hidden="true" tabindex="-1"></a>	<span class="ex">knife</span> ninefold-internal <span class="at">-l</span></span>
<span id="cb1-3"><a href="#cb1-3" aria-hidden="true" tabindex="-1"></a>	<span class="co"># Generate knifeblock configuration.</span></span>
<span id="cb1-4"><a href="#cb1-4" aria-hidden="true" tabindex="-1"></a>	<span class="ex">knife</span> ninefold-internal <span class="at">-a</span> 23 <span class="at">-g</span></span>
<span id="cb1-5"><a href="#cb1-5" aria-hidden="true" tabindex="-1"></a>	<span class="co"># Activate the knifeblock configuration.</span></span>
<span id="cb1-6"><a href="#cb1-6" aria-hidden="true" tabindex="-1"></a>	<span class="ex">knife</span> block dev-NF00000004-23</span>
<span id="cb1-7"><a href="#cb1-7" aria-hidden="true" tabindex="-1"></a>	<span class="co"># Do stuff to help investigate and resolve customer's problem.</span></span>
<span id="cb1-8"><a href="#cb1-8" aria-hidden="true" tabindex="-1"></a>	<span class="ex">knife</span> ...</span></code></pre></div>]]></summary>
</entry>

</feed>
