Applying the Unix Process Model to Web Apps (Adam Wiggins, 2011-05-09)
<p>The unix process model is a simple and powerful abstraction for running server-side programs. Applied to web apps, the process model gives us a unique way to think about dividing our workloads and scaling up over time.</p>
<h2 id='process_model_basics'>Process model basics</h2>
<p>Let’s begin with a simple illustration of the basics of the process model, using a well-known unix daemon: memcached.</p>
<p>Download and compile it:</p>
<pre><code>$ wget http://memcached.googlecode.com/files/memcached-1.4.5.tar.gz
$ tar xzf memcached-1.4.5.tar.gz
$ cd memcached-1.4.5
$ ./configure
$ make</code></pre>
<p>Run the program:</p>
<pre><code>$ ./memcached -vv
...
<17 server listening (auto-negotiate)
<18 send buffer was 9216, now 3728270</code></pre>
<p>This running program is called a <strong>process</strong>.</p>
<p>Running manually in a terminal is fine for local development, but in a production deployment we want memcached to be a <strong>managed</strong> process. A managed process should run automatically when the operating system starts up, and should be restarted if the process crashes or dies for any reason.</p>
<p>We can use a <strong>process manager</strong> to put processes under management. There are many process managers, but operating systems usually have defaults. On OS X, <a href='http://launchd.macosforge.org/'>launchd</a> is the built-in process manager; on Ubuntu, the built-in process manager is <a href='http://upstart.ubuntu.com/'>Upstart</a>.</p>
<p>Let’s set up memcached to run as a managed process on Ubuntu. Write an Upstart config:</p>
<h4 id='etcinitmemcachedconf'>/etc/init/memcached.conf</h4>
<pre><code>description "Memcached"
exec /usr/bin/memcached >> /var/log/memcached.log
start on runlevel [345]
respawn</code></pre>
<p>We can now tell Upstart to start our process for the first time:</p>
<pre><code>$ start memcached
memcached start/running, process 1212</code></pre>
<p>The memcached process is now running in the background, managed by the process manager, with its output stream going to <code>/var/log/memcached.log</code>.</p>
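<p>To confirm that the process really is under management, we can query Upstart for its status and watch the log (output illustrative):</p>
<pre><code>$ status memcached
memcached start/running, process 1212
$ tail -f /var/log/memcached.log</code></pre>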
<p>Now that we’ve established a baseline for the process model, we can put its principles to work in a more novel way: running a web app.</p>
<h2 id='mapping_the_unix_process_model_to_web_apps'>Mapping the unix process model to web apps</h2>
<p>A server daemon like memcached has a single entry point, meaning there’s only one command you run to invoke it. Web apps, on the other hand, typically have two or more entry points. Each of these entry points can be called a <strong>process type</strong>.</p>
<p>A basic Rails app will typically have two process types: a Rack-compatible web process (such as Webrick, Mongrel, or Thin), and a worker process using a queueing library (such as Delayed Job or Resque). For example:</p>
<table>
<tr style='background: #666'><th style='padding: 0.3em'>Process type</th><th>Command</th></tr>
<tr><td>web</td><td style='padding-left: 1em; font-family: monospace'>bundle exec rails server</td></tr>
<tr><td>worker</td><td style='padding-left: 1em; font-family: monospace'>bundle exec rake jobs:work</td></tr>
</table>
<p>A basic Django app looks strikingly similar: the web process can be run with the <code>manage.py</code> admin tool, and background jobs via Celery.</p>
<table>
<tr style='background: #666'><th style='padding: 0.3em'>Process type</th><th>Command</th></tr>
<tr><td>web</td><td style='padding-left: 1em; font-family: monospace'>python manage.py runserver</td></tr>
<tr><td>worker</td><td style='padding-left: 1em; font-family: monospace'>celeryd --loglevel=INFO</td></tr>
</table>
<p>Process types differ for each app. For example, some Rails apps use Resque instead of Delayed Job, or have multiple types of workers. Every app needs to declare its own process types.</p>
<p>Declaration of process types is conceptually similar to declaration of dependencies. In the Ruby world, Gem Bundler and the <code>Gemfile</code> give us a declarative, canonical way to specify the gem dependencies for an app. We need the equivalent of <code>Gemfile</code> and Bundler, but for process types.</p>
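<p>For reference, a minimal <code>Gemfile</code> for such an app might look like this (gem versions here are illustrative, not prescriptive):</p>
<pre><code>source 'http://rubygems.org'

gem 'rails', '3.0.7'
gem 'delayed_job'</code></pre>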
<h2 id='procfile_a_format_to_declare_your_process_types'>Procfile, a format to declare your process types</h2>
<p><code>Procfile</code> is an extremely simple file format which allows you to declare the process types your app uses. Its format is one process type per line, with each line formatted as:</p>
<pre><code><process type>: <command></code></pre>
<p>A Rails app might have a <code>Procfile</code> like this:</p>
<pre><code>web: bundle exec rails server -p $PORT
worker: bundle exec rake jobs:work</code></pre>
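<p>The Django app from the earlier table could declare its process types in exactly the same way (a sketch; the port binding and celeryd flags are assumptions that will vary per app):</p>
<pre><code>web: python manage.py runserver 0.0.0.0:$PORT
worker: celeryd --loglevel=INFO</code></pre>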
<p>One purpose for this is structured documentation - a developer can view the <code>Procfile</code> to see the app’s process architecture, just as they can view the <code>Gemfile</code> to see its dependencies. But the greater utility of <code>Procfile</code> lies in our ability to parse the file and run the app’s processes automatically.</p>
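<p>Parsing the format takes only a few lines of Ruby. Here’s a minimal, illustrative sketch (not Foreman’s actual implementation):</p>
<pre><code># read a Procfile into { "web" => "bundle exec rails server -p $PORT", ... }
process_types = {}
File.read('Procfile').each_line do |line|
  next if line.strip.empty? || line.strip.start_with?('#')
  name, command = line.split(':', 2)
  process_types[name.strip] = command.strip
end

process_types.each { |name, command| puts "#{name}: #{command}" }</code></pre>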
<h2 id='foreman_a_process_manager_for_local_development'>Foreman, a process manager for local development</h2>
<p><a href='http://blog.daviddollar.org/2011/05/06/introducing-foreman.html'>Foreman</a> is a handy command-line tool written by David Dollar. It reads your <code>Procfile</code> and runs one process for each process type declared by your app.</p>
<p>Install it:</p>
<pre><code>$ gem install foreman</code></pre>
<p>If you’ve written a <code>Procfile</code> (such as the one shown in the previous section) and put it in the root of your app, you can now run it like this:</p>
<img src='http://s3.amazonaws.com/adamheroku_blog/foreman_screenshot.png' alt='Foreman screenshot' style='border: 2px solid #222' />
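<p>The command to do this is <code>foreman start</code>, run from the directory containing the <code>Procfile</code>:</p>
<pre><code>$ foreman start</code></pre>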
<p>Foreman runs one process for each process type that we’ve declared. Once running, the output streams for each running process are conveniently interleaved in the foreground on our terminal. Each line is prefixed with a timestamp and the name of the running process, and color-coded by which process emitted which line.</p>
<p>Foreman is a process manager in the same sense as launchd or Upstart, but tailored to the needs of app development. It runs only a single app at a time, with all processes in the foreground, and terminates if any process crashes or if you press Ctrl-C.</p>
<h2 id='using_procfile_for_deployment'>Using Procfile for deployment</h2>
<p>Bundler has a <code>--deployment</code> command-line option, allowing you to use your app’s <code>Gemfile</code> to set up gems on your production server. <code>Procfile</code> and Foreman can be used in a similar fashion, using a feature of Foreman to export to a process manager format of your choice.</p>
<p>For example, let’s deploy a <code>Procfile</code>-backed Rails app to an Ubuntu server, selecting Upstart as the export format. As root, run the following from wherever your <code>Procfile</code> is located:</p>
<pre><code>$ foreman export upstart /etc/init
[foreman export] writing: /etc/init/myapp.conf
[foreman export] writing: /etc/init/myapp-web.conf
[foreman export] writing: /etc/init/myapp-web-1.conf
[foreman export] writing: /etc/init/myapp-worker.conf
[foreman export] writing: /etc/init/myapp-worker-1.conf
$ start myapp
myapp start/running, process 28572</code></pre>
<p>Your app is now running as two managed processes. You can use all of Upstart’s control capabilities, such as restarting the app when deploying a new release of your code:</p>
<pre><code>$ restart myapp
myapp start/running, process 28591</code></pre>
<h2 id='process_types_vs_processes'>Process types vs processes</h2>
<p>To scale up, we’ll want a full grasp of the relationship between process types and processes.</p>
<p>A <strong>process type</strong> is the prototype from which one or more <strong>processes</strong> are instantiated. This is similar to the way a <strong>class</strong> is the prototype from which one or more <strong>objects</strong> are instantiated in object-oriented programming.</p>
<p>Here’s a visual aid showing the relationship between processes (on the vertical axis) and process types (on the horizontal axis):</p>
<img src='http://s3.amazonaws.com/adamheroku_blog/process_diagram.png' style='border: 2px solid #222' />
<p>Processes, on the vertical axis, are <strong>scale</strong>. You scale in this direction when you need more concurrency for the type of work handled by that process type. Foreman lets you specify the concurrency for each process type when you export, using the <code>-c</code> option. To get a process formation matching the diagram, you’d use this command:</p>
<pre><code>$ foreman export upstart /etc/init -c web=2 -c worker=4 -c clock=1</code></pre>
<p>Process types, on the horizontal axis, are <strong>workload diversity</strong>. Each process type specializes in a certain type of work.</p>
<p>For example, some apps have two types of workers, one for urgent jobs and another for long-running jobs. By subdividing into more specialized workers, you can get better responsiveness on your urgent jobs and more granular control over how to spend your compute resources.</p>
<p>Scheduling work at a certain time of day (e.g., the equivalent of cron) can be achieved with a specialized process type: a library like <a href='https://github.com/bvandenbos/resque-scheduler#readme'>resque-scheduler</a> or <a href='http://adam.heroku.com/past/2010/6/30/replace_cron_with_clockwork/'>Clockwork</a> can be run as a singleton process for a very flexible cron replacement. <a href='http://adam.heroku.com/past/2010/3/19/consuming_the_twitter_streaming_api/'>Consuming the Twitter streaming API</a> is another type of specialized work best served by a singleton process.</p>
<p>Pulling all of these potential use cases together, here’s an example of a <code>Procfile</code> for an app with five process types: a Sinatra web app, two types of Resque workers, a singleton clock with Clockwork, and a singleton ruby script consuming the Twitter streaming API:</p>
<pre><code>web: bundle exec ruby web.rb -p $PORT
fastworker: QUEUE=urgent bundle exec rake resque:work
slowworker: QUEUE=* bundle exec rake resque:work
clock: bundle exec clockwork clock.rb
tweetscan: bundle exec ruby tweetscan.rb</code></pre>
<p>When we run this <code>Procfile</code> with Foreman, we’ll get five processes - one for each process type. In production, we can use Foreman’s concurrency argument to fan out to dozens or even hundreds of running processes, potentially spread out across multiple machines.</p>
<h2 id='conclusion'>Conclusion</h2>
<p>The unix process model is a powerful way to approach running your web app. <code>Procfile</code> gives us a way to declare process types, and Foreman gives us an easy way to run the app’s processes in both development and deployment environments.</p>
How To Scale a Development Team (Adam Wiggins, 2011-04-28)
<p>As hackers, we’re familiar with the need to scale web servers, databases, and other software systems. An equally important challenge in a growing business is scaling your development team.</p>
<p>Most technology companies hit a wall with dev team scalability somewhere around ten developers. Having navigated this process fairly successfully over the last few years at Heroku, I’ll present in this post what I see as the stages of life in a development team, and the problems and potential solutions at each stage.</p>
<h2>Stage 1: Homebrewing</h2>
<p>In the beginning, your company is 2 - 4 guys/gals working in someone’s living room, a cafe, or a coworking space. Communication and coordination are easy: with just a few people sitting right next to each other, everyone knows what everyone else is working on. Founders and early employees tend to be very self-directed, so the need for management is nearly non-existent. Everyone is a generalist and works on a little bit of everything. You have a single group chat channel and a single all@yourcompany.com mailing list. There’s no real need to track any tasks or even bugs. A full copy of the state of the entire company and your product is easily contained within everyone’s brain.</p>
<p>At this stage, you’re trying to create and vet your minimum viable product, which is a fancy way of saying that you’re trying to figure out what you’re even doing here. Any kind of structure or process at this point will be extremely detrimental. Everyone has to be a generalist and able to work on any kind of problem - specialists will be (at best) somewhat bored and (at worst) highly distracting because they want to steer product development into whatever realm they specialize in.</p>
<h2>Stage 2: The first hires</h2>
<p>Once you’ve gotten a little funding and been able to hire a few more developers, for a total of 5 - 9, you may find that the ad-hoc method of coordination (expecting to overhear everything of importance by sitting near teammates) starts to break down. You have both too much communication (keeping tabs on six other people’s work is time-consuming) and too little communication (you end up colliding on trying to fix the same bug, answer the same support email, or respond to the same Nagios page).</p>
<p>At this point, you want to add just a sprinkle of structure: maybe an iteration planning meeting on Monday, daily standups, and tracking big to-do items and bugs on a whiteboard or in a simple tool like <a href='http://lighthouseapp.com/'>Lighthouse</a>. Perhaps you switch to a support system like <a href='http://www.zendesk.com/'>Zendesk</a> where incoming support requests can be assigned, and you add a simple on-call rotation for pages via <a href='http://www.pagerduty.com/'>PagerDuty</a>. Your single internal chat and email channels continue to work fine.</p>
<p>Resist the urge to introduce too much structure and process at this point. Some startups, on reaching this stage, declare “we’ve got to grow up and act like a real company now” and immediately try to switch to heavy-handed tactics. For example: full-fledged SCRUM, heavyweight tools like Jira, or hiring a project manager or engineering manager. Don’t do that stuff. You’ve got a team that works well together in an ad-hoc way; you probably have some natural leaders on the team who direct a lot of the work while still being hands-on themselves; and while your product is launched and in the hands of users, in many ways you’re still trying to figure out what your company is really all about. Introducing bureaucracy into this environment is almost guaranteed to block you from doing what you’re really supposed to be doing, which is <a href='http://steveblank.com/2010/04/12/why-startups-are-agile-and-opportunistic-%E2%80%93-pivoting-the-business-model/'>pivoting in search of your scalable business model</a>.</p>
<p>Focus at this stage is key. Everyone is still a generalist, but the whole development team should be aligned behind a single goal (aka milestone) at a time. If you try to attack multiple battlefronts at once, you’ll do everything badly. <a href='http://www.growthink.com/content/which-worse-entrepreneurs-indigestion-or-starvation'>Great companies are more likely to die of indigestion from too much opportunity than of starvation from too little.</a> Pick your battles carefully and stay focused.</p>
<h2>Crisis on the brink of Stage 3</h2>
<p>Grow to 10 - 15 developers, and you’re on the verge of a major team structure change. I’ve been told that many promising startups have been killed by failing to weather the transition between these stages.</p>
<p>With this many developers, iteration planning, standups, or any other kind of development-team meeting has become so big that the attendees spend most of their time bored. Any individual developer will find it difficult to find a sense of purpose or shared direction in the midst of trudging through laundry lists of details on other people’s work.</p>
<p>In programming, when a class or source file gets too big, the solution is to break it down into smaller pieces. The same principle holds for scaling a development organization. You need to break into targeted teams.</p>
<h2>Stage 3: Breaking into teams</h2>
<p>Dividing your single team of generalists is harder than it sounds. Draw the fences in the wrong place, and you’ll create coordination problems that make things even worse. Find the right places to divide and you’ll see a massive increase in focus, happiness, and productivity.</p>
<p>The key to a good team is a well-defined sphere of authority, with clear interfaces to other teams. The team should own the vision and direction for the part of your product that it works on. It should be able to operate with maximum autonomy on everything it owns without having to ask for permission or information from other teams, except for the infrequent case of a feature or bug that crosses team boundaries.</p>
<p>A close mapping between your software architecture and your team architecture will be a big help here. By this time you have probably already converted your monolithic application into a distributed system of multiple components communicating over REST, AMQP, or some other RPC mechanism. (And if not, you should strongly consider doing so, coincident with your dev team split.) There should be an obvious mapping between software components - each of which has its own source repository and deployment location/procedure - and your nascent teams.</p>
<p>Deciding what person goes on what team will be somewhat arbitrary at first. My approach was to sit down with each developer and dig in to try to understand what parts of the system they were most passionate about working on. From there I divided up the teams as best I could. Some people found perfect homes on their first team assignment, others were dissatisfied and needed to transfer to another team fairly quickly. Over time, the team territories became very well-defined, so it became much easier to slot new hires in the right place. Let developers follow their own passions and they will gravitate toward the team where they will do the best work.</p>
<p>Separately, you should have found your product/market fit by this point. If you’ve grown to this size and are still figuring out your company’s meaning for existence, you’ve got big problems. If that’s the case, stop growing, and scale back down until you nail product/market fit.</p>
<h2>Specialization</h2>
<p>Another reason to break into teams is specialization. Types of engineering specialists include ops engineers/sysadmins, infrastructure developers, front-end web developers, back-end web developers, business engineers / data analysts, and developers who focus on a particular language. Language specialists are becoming more common, because many internet-scale companies write high-concurrency components in functional languages like Erlang, Scala, or Clojure, generally handled by a different set of developers than the authors of the Ruby, Python, or PHP web components.</p>
<p>Early on, specialists are rarely desirable. There are too many different layers to work on in delivering a software product relative to the number of people available to contribute, so everyone pitches in on everything. A single developer may find themselves doing work ranging from ops projects like kernel updates on the OS to front-end projects like writing jQuery effects for the UI.</p>
<p>Once you reach the point where you’ve got a dozen developers, your product has reached a level of usage and maturity where the problems are getting much harder. Scaling the database is not only a full-time job, but also requires a deep level of specialized knowledge that can’t be acquired if that person is also simultaneously learning to be a jQuery expert and an iOS expert and an Erlang expert.</p>
<p>You need people who can and are willing to focus on just a few closely related areas so that they can build very deep knowledge in those areas. Some of these will be your existing generalists deciding to specialize, and some will be new hires. You can now hire for the kind of specialist that would not have been appropriate when your company was smaller. Generalists are always useful to have around, and some of them may move into management - filling business owner roles for a team, rather than hands-on development.</p>
<h2>Heroku's first teams</h2>
<p>Heroku’s initial team breakdown looked like this:</p>
<ul>
<li>API - Owns our user-facing web app and the matching Heroku client gem.</li>
<li>Data - Builds and runs our PostgreSQL-as-a-service database product.</li>
<li>Ops - Shepherds and protects availability of the production system.</li>
<li>Routing - Manages everything necessary to get HTTP requests routed to user web processes.</li>
<li>Runtime - Handles packaging code for deploy and starting/stopping/managing user processes.</li>
</ul>
<p>Each of these teams owns between one and five components. For example, the API team owns the Rails app which runs at <a href='https://api.heroku.com/'>api.heroku.com</a> and the Heroku client gem. The Data team owns the provisioning and monitoring tool for our database service, as well as all of the individual running databases. (Peter van Hardenberg was the intrapreneur who founded and now leads our Data team. He tells a bit of that story in the later part of <a href='http://www.youtube.com/watch?v=k-RIMFvVGoc'>this video</a>.)</p>
<h2>Team size and roles</h2>
<p>For us, the ideal team layout has been two developers and one business owner. One developer is not enough over the long term (they need a second pair of eyes on the code, and besides, one is a lonely number). Three developers works fine as well. Get to four or five and things start to become a bit crowded; there may not be enough surface area for them all to work without stepping on each other’s toes constantly. Almost all of Heroku’s teams have two developers.</p>
<p>“Business owner” is a somewhat clumsy term, but it’s the best we’ve come up with to describe the person doing some combination of product management, project management, and general management for the team. The business owner fills the important role of knowing the business value of the team’s work to the company and how it fits in with the larger product. They can broker cross-team communication, help prioritize projects and tasks by business value, and may provide status reports on the team’s progress or presentations to the senior executives and/or the entire company to justify the team’s ongoing existence.</p>
<p>I’m a fan of hacker-entrepreneurs in the business owner role: a strong technical background means they have an in-depth understanding of the work being done, and are able to command huge respect from those whose work they are directing. This sort of person is not necessarily available for all teams, but find them when you can. In many cases it involves quite a bit of convincing to get a hacker to give up coding as their primary function.</p>
<p>Avoid having developers belong to more than one team. They are makers and need to be able to focus their full attention on their team’s current projects without distraction or attempts at multitasking. Business owners, however, can sometimes belong to multiple teams. It’s not always a full-time job, and there are benefits to cross-team communication by having one person be a business owner for two or more related teams.</p>
<h2>Cohesion</h2>
<p>In the earlier stages, you should avoid attacking on multiple battlefronts, and instead keep all developers focused around a single goal for the company. With the creation of fiefdoms for each team, this has changed. Now you can and should attack on multiple battlefronts. Each team should be executing independently against its own goals, and not worrying too much about what other teams are doing.</p>
<p>It’s awesome to be able to pursue three, four, five big goals simultaneously. A few months after breaking into teams at Heroku, we had a day where three different teams were all releasing major new features. It’s an incredible feeling.</p>
<p>But now you have a new problem: lack of cohesion. Your decentralized teams are setting their own roadmaps and deciding on features independently. But to avoid fragmentation in your product, someone needs to decide an overall direction and set of product values. More succinctly: you need a strategy.</p>
<p>But this post is long enough as it is. I’ll save discussion of cohesion and strategy for another time.</p>
Ephemeralization (Adam Wiggins, 2011-04-07)
<p>Paul Graham’s <a href='http://www.paulgraham.com/tablets.html'>essay on tablets</a> referenced a fascinating term I hadn’t heard before: “ephemeralization.” <a href='http://en.wikipedia.org/wiki/Ephemeralization'>Wikipedia describes it</a> as “the ability of technological advancement to do ‘more and more with less and less until eventually you can do everything with nothing’.”</p>
<h2>An example: video playback technology</h2>
<p>Fifty years ago, the only option for watching video was an entire movie theater, with a huge projector fixed in place, and film reels the size of barrels for a single movie. The 1980s gave us VCRs and VHS tapes: a playback device that you could carry with two hands and that offered more features than a movie theater (like pause, fast forward, and rewind); the tapes were small enough that you could keep a reasonably sized movie library on your bookshelf. In the 1990s we got DVD players and DVDs, shrinking the playback device yet smaller, shrinking the movie media (tapes -> DVDs) yet smaller, and offering yet more features (like higher resolution).</p>
<p>In the 2000s, PlayStations, Xboxes, and computers appeared with built-in DVD players, shrinking the playback device to nothing (it became part of a device you already owned). And in 2010, with Netflix streaming, you have instant access to tens of thousands of movies without needing any physical media at all. The playback device and media have both shrunk to have no corporeal representation whatsoever (hence “ephemeralized”), yet you have access to more movies and more features for playback of those movies than ever in the past.</p>
<h2>Ephemeralization at Heroku</h2>
<p>Heroku is a company built on the premise that running software as a service can be ephemeralized. Where Netflix streaming eliminates dedicated playback devices and media, Heroku eliminates servers, routers, and most or all systems administration.</p>
<p>Ephemeralization is a core value of our engineering and product design approaches. I believe this has been a big part of our success: internally, it helps us succeed at building a scalable, maintainable infrastructure; and externally, it helps us succeed at offering a lean product which has not turned into a swiss army knife despite ever-expanding capabilities.</p>
<h2>Machete, not a swiss army knife</h2>
<p>One of our core values on product design is that we want to create a machete, not a swiss army knife. A machete is a simple tool that has wide application to many tasks. A swiss army knife is a complex tool that has specialized gadgets for each task you might want to perform. (See <a href='http://www.youtube.com/watch?v=3BhDLm9jo5Y'>James' startup school talk</a>, about 12m in, for further elaboration.)</p>
<p>Some examples of user-facing ephemeralization Heroku has executed:</p>
<ul>
<li>Switching from our custom gems manifest to the community’s off-the-shelf solution, Gem Bundler. This allowed us to maintain much less code and documentation, while offering a more sophisticated gem dependency system. Rails 3 comes with a Gemfile out of the box, so from the user’s perspective, the effort of declaring your version of Rails as a gem dependency has disappeared.</li>
<li>Our <a href='http://blog.heroku.com/archives/2010/12/13/logging/'>new logging system</a> merges all logs into a single stream, so <code>logs:cron</code> is no longer a separate codepath. Users can still filter to just their cron logs, but this is done via a <a href='http://devcenter.heroku.com/articles/logging#filtering'>general-purpose filtering interface</a>.</li>
<li>Those of you who have been using Heroku since the beginning will recall we once had a single sign-on system (known as <code>heroku_user</code>) that allowed you to use your Heroku user login to log into your app. This was a cool feature, but in the end the maintenance cost was high and the gain low. More general-purpose, standards-based solutions such as Google Apps login became commonplace (with tools like <a href='https://github.com/hassox/warden/wiki/overview'>Warden</a> to help Rubyists use them), so we removed <code>heroku_user</code> altogether and let users roll their own.</li>
</ul>
<p>Each of these changes gave our platform a more machete-like user experience.</p>
<h2>Ephemeralizing infrastructure</h2>
<p>Internally, we’re always looking for opportunities to reduce or eliminate infrastructure.</p>
<p>One example of this was when we switched from a specialized server type for our main database (the one that contains our user, app, and billing records) to a database running on the same system we use to provision databases for Heroku user apps. <a href='http://en.wikipedia.org/wiki/Self-hosting'>Self-hosting</a> gets us more leverage out of our existing database management and monitoring tools.</p>
<p>We also look to replace internally-built tools with off-the-shelf solutions whenever we can. Two examples: switching from a custom-built pager system to <a href='http://www.pagerduty.com/'>PagerDuty</a>, and switching from a custom-built logging system to syslog and <a href='http://www.splunk.com/'>Splunk</a>.</p>
<p>Each of these changes made our ability to manage and scale our infrastructure substantially easier. Fewer moving parts means less to keep track of, less to worry about, and less to go wrong.</p>
<h2>Applying ephemeralization at your company</h2>
<p>If you decide that you’d like to apply this principle at your company, how do you do it?</p>
<p>Everyone in your company should be constantly pushing to do more with less. This means being willing to look at every component, every user-facing feature, and every line of code with a critical eye. Some questions you should be constantly asking:</p>
<ul>
<li>What can we replace with third-party solutions? (like Heroku did with gems manifest -> Bundler, custom pager -> PagerDuty, and custom logger -> syslog/Splunk)</li>
<li>What user-facing features can be merged together to create a more machete-like UX? (like Heroku did with logs:cron -> logs)</li>
<li>Where can we generalize an existing system in order to have it take over the duties of a more specialized system? (like Heroku did with our specialized database server -> self-hosted database)</li>
<li>What can we eliminate completely when its cost vs benefit analysis comes up short? (like Heroku did with our <code>heroku_user</code> single sign-on system)</li>
</ul>
<p>Proposals to ephemeralize a component or feature will sometimes be met with emotionally-charged responses from your team. It’s totally reasonable to feel attached to a component everyone has worked hard on and that has been important historically. But realize that the component has value because it got you to where you are today, not necessarily because of its ongoing existence in the future.</p>
<p>Everything Heroku has ever ephemeralized out of our infrastructure was part of our journey to the product and infrastructure we have today. I don’t regret for a moment all the time I spent coding on our single sign-on solution, our old cron log fetcher, our gems manifest, or any of a host of other things that are either gone or are fading out of our product today.</p>
<p>Referencing the video example again: DVDs were a fantastic bit of innovation and brought the world forward into the modern age of movie-watching. But our love of DVDs shouldn’t be a blocker to us adopting new technologies with more capabilities and a smaller footprint, like on-demand streaming video.</p>
<h2>Conclusion</h2>
<p>Every month, Heroku strives to do more with less. More: users, traffic, capabilities, and versatility for the users. Less: lines of code, components, moving parts, APIs, server types, tools. Ephemeralization is how we keep our product and our infrastructure lean and nimble over the long term.</p>
Logs Are Streams, Not Files (Adam Wiggins, 2011-04-01)
<p>Server daemons (such as PostgreSQL or Nginx) and applications (such as a Rails or Django app) sometimes offer a configuration parameter for a path to the program’s logfile. This can lead us to think of logs as files.</p>
<p>But a better conceptual model is to treat logs as time-ordered streams: there is no beginning or end, but rather an ongoing, collated collection of events which we may wish to view in realtime as they happen (e.g. via <code>tail -f</code> or <code>heroku logs --tail</code>) or which we may wish to search in some time window (e.g. via <code>grep</code> or Splunk).</p>
<h2>Using the power of unix for logs</h2>
<p>Unix provides some excellent tools for handling streams. There are two default output streams, <code>stdout</code> and <code>stderr</code>, available automatically to all programs. Streams can be turned into files with a redirect operator, but they can also be channeled in more powerful ways, such as splitting the streams to multiple locations or pipelining the stream to another program for further processing.</p>
<p>A program that uses <code>stdout</code> for its logging can easily log to any file you wish:</p>
<pre><code>$ mydaemon >> /var/log/mydaemon.log</code></pre>
<p>(Typically you would not invoke this command directly, but would run this from an init program such as Upstart or Systemd.)</p>
<p>Programs that send their logs directly to a logfile lose all the power and flexibility of unix streams. What’s worse is that they end up reinventing some of these capabilities, badly. How many programs end up re-writing log rotation, for example?</p>
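<p>Rotation, for example, can be handled entirely by existing stream-aware tools. One option (among many) is to pipe the stream through Apache’s <code>rotatelogs</code> utility; the path and rotation interval below are illustrative:</p>
<pre><code>$ mydaemon | rotatelogs /var/log/mydaemon.%Y-%m-%d.log 86400</code></pre>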
<h2>Distributed logging with syslog</h2>
<p>Logging on any reasonably large distributed system will generally end up using the syslog protocol to send logs from many components to a single location. Programs that treat logs as files are now on the wrong path: if they wish to log to syslog, each program needs to implement syslog internally - and provide yet more logging configuration options to set the various syslog fields.</p>
<p>A program using <code>stdout</code> for logging can use syslog without needing to implement any syslog awareness into the program, by piping to the standard <code>logger</code> command available on all modern unixes:</p>
<pre><code>$ mydaemon | logger</code></pre>
<p>Perhaps we want to split the stream and log to a local file as well as syslog:</p>
<pre><code>$ mydaemon | tee /var/log/mydaemon.log | logger</code></pre>
<p>A program which uses <code>stdout</code> is equipped to log in a variety of ways without adding any weight to its codebase or configuration format.</p>
<h2>Other distributed logging protocols</h2>
<p>Syslog is an entrenched standard for distributed logging, but there are other, more modern options as well. <a href='http://www.splunk.com/'>Splunk</a>, fast becoming an indispensable tool for anyone running a large software service, can accept syslog; but it also has its own custom protocol which offers additional features like authentication and encryption. <a href='https://github.com/facebook/scribe/wiki'>Scribe</a> is another example of a modern logging protocol.</p>
<p>Programs that log to <code>stdout</code> can be adapted to work with a new protocol without needing to modify the program. Simply pipe the program’s output to a receiving daemon just as you would with the <code>logger</code> program for syslog. Treating your logs as streams is a form of <a href='http://en.wikipedia.org/wiki/Future_proof'>future-proofing</a> for your application.</p>
<h2>Logging in the Ruby world</h2>
<p>Most Rack frameworks (Sinatra, Ramaze, etc) and Rack webservers (Mongrel, Thin, etc) do the right thing: they log to <code>stdout</code>. If you run them in the foreground, as is typical of development mode, you see the output right in your terminal. This is exactly what you want. If you run in production mode, you can redirect the output to a file, to syslog, to both, or to any other logging system that can accept an input stream.</p>
<p>Unfortunately, Rails stands out as a major exception to this simple principle. It creates its own log directory and writes various files into it; some plugins even take it upon themselves to write their own, separate logfiles. This hurts the local development experience: what you see in your terminal isn’t complete, so you have to open a separate window with <code>tail -f log/*.log</code> to get the information you want. But it hurts the deployment experience even more, because you end up having to tinker around with a bunch of Rails logger configuration options to get your logs from all your web machines to merge into a single stream.</p>
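<p>One common workaround is to point the Rails logger at <code>stdout</code> yourself, for example from an initializer. This is only a sketch - exact placement and logger class vary by Rails version, and some plugins may hold references to the old logger:</p>
<pre><code># config/initializers/stdout_logging.rb (illustrative)
require 'logger'

STDOUT.sync = true                 # don't buffer log lines
Rails.logger = Logger.new(STDOUT)  # send Rails logs to the stdout stream</code></pre>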
<h2>Logging on Heroku</h2>
<p>The need to treat application logs as a stream is especially poignant with <a href='http://blog.heroku.com/archives/2010/12/13/logging/'>Heroku's new logging system</a>. On the backend, we route logs with a syslog router written in Erlang called <a href='https://github.com/heroku/logplex'>Logplex</a>.</p>
<p>Logplex handles input streams (which we call “sinks”) from many different sources: all the dynos running on the app, system components like our HTTP router, and (currently in alpha) logs from add-on providers. Sinks are merged together into channels (each app has its own channel), each of which is a unified stream of all the logs relevant to that app. This allows developers to see a holistic view of everything happening with their app, or to filter down to logs from a particular type of sink (for example: just logs from the HTTP router, or just logs from worker processes).</p>
<p>Further, log streams can also be sent outbound, which we call “drains.” Users can configure syslog drains, and we’re currently working up a technical design for how add-on providers can automatically add drains. This latter item will enable a new class of log search and archival add-on, most notably the emerging syslog-as-a-service products like <a href='http://www.loggly.com/'>Loggly</a> and <a href='https://papertrailapp.com/'>Papertrail</a>.</p>
<p>This logging system works quite well, and it gets even better with the new features on the way - but it only works where all programs output their logs as streams. Programs that write logfiles, such as Rails in its default configuration, don’t make sense in this world.</p>
<p>As a workaround, Heroku injects the <a href='https://github.com/ddollar/rails_log_stdout/blob/master/init.rb'>rails_log_stdout</a> plugin into Rails apps at deploy time. We’d prefer not to have to do this (injecting code is a dicey way to solve problems), but it’s the best way to get Rails logs into the app’s logstream without requiring extra configuration from the app developer.</p>
<h2>Conclusion</h2>
<p>Logs are a stream, and it behooves everyone to treat them as such. Your programs should log to <code>stdout</code> and/or <code>stderr</code> and omit any attempt to handle log paths, log rotation, or sending logs over the syslog protocol. Directing where the program’s log stream goes can be left up to the runtime container: a local terminal or IDE (in development environments), an Upstart / Systemd launch script (in traditional hosting environments), or a system like Logplex/Heroku (in a platform environment).</p>
Memcached, a Database? (Adam Wiggins, 2010-07-19)
<p>In my QCon talk <a href='http://www.infoq.com/presentations/Horizontal-Scalability'>Horizontal Scalability via Transient, Shardable, Share-Nothing Resources</a>, I argued that memcached is the father of modern shardable resources. Today’s NoSQL key-value stores all owe some part of their inspiration to memcached. Even feature-rich datastores such as CouchDB or Cassandra also borrow a cornerstone idea from memcached: throw away some features historically associated with databases in order to make big gains in scalability and resiliency.</p>
<p>Memcached was created to be a cache, as its name implies. But developers eventually discovered that it was useful for storing many types of transient data, such as <a href='http://lists.danga.com/pipermail/memcached/2006-June/002384.html'>sessions</a>, <a href='http://code.google.com/appengine/articles/scaling/memcache.html#transient'>page-view counters</a>, or <a href='http://simonwillison.net/2009/Jan/7/ratelimitcache/'>API rate limiting counters</a>.</p>
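<p>As a sketch of the rate-limiting case, here’s roughly what a per-user counter might look like in Ruby. This assumes the Dalli memcached client gem and its <code>incr(key, amount, ttl, default)</code> behavior; the limit and window numbers are arbitrary:</p>
<pre><code>require 'dalli'

cache = Dalli::Client.new('localhost:11211')

# Allow at most 100 API calls per user per hour. The counter lives only in
# memcached and simply expires; no cleanup job is ever needed.
def rate_limited?(cache, user_id, limit = 100, window = 3600)
  key = "api-rate:#{user_id}:#{Time.now.to_i / window}"
  count = cache.incr(key, 1, window, 1)  # creates the key with a TTL on first hit
  count > limit
end

puts rate_limited?(cache, 42) ? 'throttled' : 'ok'</code></pre>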
<p>App developers storing data in memcached instead of their SQL database? Does that mean that memcached can be classified as a type of database system?</p>
<h2>First Principles</h2>
<p>To answer that question, we have to work our way back to a definition for the family of software typically referred to as “databases.” I’m going to use the term datastore, because it seems more natural when applied to modern NoSQL options. (For simplicity’s sake, let’s assume that datastore, database, database system, and DBMS are all roughly synonymous.)</p>
<p>Here’s my definition:</p>
<blockquote>
<p>A datastore is software that stores atomic chunks of data known as records, and allows those records to be retrieved later.</p>
</blockquote>
<p>Datastores are a superset that includes relational databases, graph databases, key-value stores, and document databases. DBM, Tokyo Cabinet, Redis, S3, MySQL, PostgreSQL, CouchDB, MongoDB, Neo4j, and Hadoop are all part of this big happy family. Now onto the question of whether memcached belongs here as well.</p>
<h2>On Persistence</h2>
<p>Many would argue that memcached should be disqualified from being considered a datastore on account of its transience.</p>
<p>My definition above says that you can retrieve the data you’ve stored later. But what’s the duration of “later”? We expect datastores to be persistent - if they aren’t, what’s the point? But persistence does not have to be forever. It only needs to last as long as the application logic requires.</p>
<p>MongoDB offers capped collections and Redis offers expiring keys; in both of these cases, the fact that the data does not persist forever is a feature. Memcached is a datastore which has extreme transience as a feature. How many times have application developers written nightly cron jobs to clean up old session data from their SQL datastore? Using memcached, you can skip this extra garbage-collection step. Memcached is a good fit for data that you want to last for a little while, but not forever.</p>
<h2>Conclusion</h2>
<p>Memcached set an early example for many patterns now prevalent in NoSQL. It got us thinking about how we can make trade-offs between datastore features and ease of scaling. Memcached occupies the far extreme of this spectrum: it trades away almost every feature we associate with database systems, keeping just the bare minimum, and in return it gets blinding speed and near-infinite horizontal scalability. That trade proved to be a worthwhile one, as memcached is now <a href='http://en.wikipedia.org/wiki/Memcached'>a critical piece of infrastructure for many of the world's largest web apps</a>.</p>
<p>The memcached case is a great example of how NoSQL is broadening how we think about data storage and retrieval. This has opened us up to a variety of specialized datastores: memcached, S3, and Hadoop, to pick some very successful examples. Each of these occupies a unique (and often very large) niche in the data storage space. We’ve learned that not all data is the same; the proliferation of options for how we store and retrieve our data is a natural consequence.</p>
Replace Cron with Clockwork (Adam Wiggins, 2010-06-30)
<p><a href='http://commons.wikimedia.org/wiki/File:Spring-cover_pocket_clock3_clockwork2.jpg'><img src='http://hirodusk.s3.amazonaws.com/clockwork.png' style='float: left; vertical-align: top ; margin-right: 16px; margin-bottom: 6px; border: 2px solid #777' /></a> If your app needs to poll a remote API once an hour, or send out an email report every evening, what tool do you reach for? Probably cron. Triggering events at a given wall clock time is what cron is for, but it works better at the system layer (e.g. rotating logs on a server) than at the app layer (e.g. sending out a daily report to your app’s users). I’ve described all the ways cron could be improved for app clock events <a href='http://adam.heroku.com/past/2010/4/13/rethinking_cron/'>in a previous post</a>.</p>
<p>My wishlist for an app-focused cron replacement, described in that post, can be fulfilled by a little hackery with a few available Ruby libraries (rufus-scheduler and resque-scheduler). But both of these libraries have weaknesses; so I decided to write my own, following their example of the lockless, single-process scheduler pattern.</p>
<p>The result is <a href='http://github.com/adamwiggins/clockwork'>Clockwork</a>.</p>
<h2>Using Clockwork</h2>
<p>First, the syntax for scheduling events:</p>
<pre><code>every 1.hour, 'apis.poll'
every 1.day, 'reports.email', :at => '00:00'</code></pre>
<p>A time period and a job name are the only required parameters. Options may include an hour and minute to run for daily jobs.</p>
<p>The job name is passed to your queueing system to enqueue a job, to be worked in one of your background job workers. (An important part of the lockless scheduler process pattern is that it never does any work itself, only queues up jobs for the workers to handle.) In order to make Clockwork queueing system-agnostic, the second bit of code you need is a small handler block that declares how to enqueue a job.</p>
<p>For example, if you’re using my favorite combo, <a href='http://adam.heroku.com/past/2010/4/24/beanstalk_a_simple_and_fast_queueing_backend/'>Beanstalk+Stalker</a>, your handler block will look like this:</p>
<pre><code>require 'stalker'
handler { |job| Stalker.enqueue(job) }</code></pre>
<p>Put these two segments together into a file named clock.rb:</p>
<pre><code>require 'stalker'
handler { |job| Stalker.enqueue(job) }

every 1.hour, 'apis.poll'
every 1.day, 'reports.email', :at => '00:00'</code></pre>
<h2>Running the Clock Process</h2>
<p>To run, install the clockwork gem (<code>gem install clockwork</code>, or specify it in your <code>Gemfile</code>), and then run with the <code>clockwork</code> binary:</p>
<pre><code>$ clockwork clock.rb
[2010-06-28 11:27:42 -0700] Starting clock for 2 events: [ apis.poll reports.email ]</code></pre>
<p>Or with Bundler: <code>bundle exec clockwork clock.rb</code></p>
<p>More details about the use and operation of Clockwork can be found <a href='http://github.com/adamwiggins/clockwork#readme'>in the readme</a>.</p>
<h2>A Sample Application</h2>
<p>To illustrate what Clockwork would look like in a full application, I’ve written a sample app which fetches the Dow Jones index from Google Finance once every three minutes. The clock process enqueues the fetch job. The worker works the job, pulling down the index from the remote API, and storing the result in the database. The web app pulls from the database, showing the user all historic data points.</p>
<p>I wrote the same app with two web framework / database / queue combos, so pick the one that suits your style:</p>
<ul>
<li><a href='http://github.com/adamwiggins/clockwork-sinatra-beanstalk'>Clockwork in a Sinatra/MongoDB app with Beanstalk</a></li>
<li><a href='http://github.com/adamwiggins/clockwork-rails-dj'>Clockwork in a Rails 3.0b4/SQLite3 app with Delayed Job</a></li>
</ul>
<p>In both cases, the app has three processes: the web process (serving web requests to the user), the clock process (enqueuing jobs periodically), and the worker process (working the job to fetch data from the remote API and store it in the database).</p>
<p>I can’t overemphasize the importance of the clock process being separate from your worker process. The reason for this is that the clock is not <a href='http://adam.heroku.com/past/2009/7/6/sql_databases_dont_scale/'>horizontally scalable</a> (and doesn’t need to be); but your worker processes are fully parallelizable. In a real app, you’d run two, four, ten, or a hundred workers. You will only ever have one clock. The clock process can and must stay lightweight, doing no more than queueing jobs when the appropriate wall clock time is reached.</p>
<h2>Conclusion</h2>
<p>Replacing a tried-and-true tool like cron is not something to be undertaken lightly. However, after years of dissatisfaction with cron as a tool for app-level scheduling, I truly believe it’s time to try something different. I’ve been using Clockwork in a number of my own personal and work apps, and I’ve been very pleased with the results so far. Give it a try and tell me what you think.</p>Gluecon Slideshttp://adam.heroku.com/past/2010/5/29/gluecon_slides/2010-05-29T23:18:25-07:002010-05-29T23:18:25-07:00Adam Wiggins<div id='__ss_4353296' style='width:425px'><strong style='display:block;margin:12px 0 4px'><a href='http://www.slideshare.net/adamwiggins/cloud-services-4353296' title='Cloud Services'>Cloud Services</a></strong><object id='__sse4353296' height='355' width='425'><param name='movie' value='http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=cloudservices-100530011701-phpapp01&stripped_title=cloud-services-4353296' /><param name='allowFullScreen' value='true' /><param name='allowScriptAccess' value='always' /><embed name='__sse4353296' src='http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=cloudservices-100530011701-phpapp01&stripped_title=cloud-services-4353296' allowfullscreen='true' type='application/x-shockwave-flash' allowscriptaccess='always' height='355' width='425' /></object><div style='padding:5px 0 12px'>View more <a href='http://www.slideshare.net/'>presentations</a> from <a href='http://www.slideshare.net/adamwiggins'>Adam Wiggins</a>.</div></div><div id='__ss_4353296' style='width:425px'><strong style='display:block;margin:12px 0 4px'><a href='http://www.slideshare.net/adamwiggins/cloud-services-4353296' title='Cloud Services'>Cloud Services</a></strong><object id='__sse4353296' height='355' width='425'><param name='movie' value='http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=cloudservices-100530011701-phpapp01&stripped_title=cloud-services-4353296' /><param name='allowFullScreen' value='true' /><param name='allowScriptAccess' value='always' /><embed name='__sse4353296' src='http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=cloudservices-100530011701-phpapp01&stripped_title=cloud-services-4353296' allowfullscreen='true' type='application/x-shockwave-flash' allowscriptaccess='always' height='355' width='425' /></object><div style='padding:5px 0 12px'>View more <a href='http://www.slideshare.net/'>presentations</a> from <a href='http://www.slideshare.net/adamwiggins'>Adam Wiggins</a>.</div></div>Startup Lessons Learnedhttp://adam.heroku.com/past/2010/4/30/startup_lessons_learned/2010-04-30T12:19:55-07:002010-04-30T12:19:55-07:00Adam Wiggins<p>Like many folks in the startup crowd, I’m a reader of <a href='http://www.startuplessonslearned.com/'>Eric Ries' blog</a> (<a href='http://adam.blog.heroku.com/past/2009/2/5/stealth/'>some</a> <a href='http://adam.blog.heroku.com/past/2009/4/1/phases_of_a_companys_life/'>links</a>), and I’ve read Steve Blank’s <a href='http://books.google.com/books?id=oLL2pjn2RV0C'>Four Steps to the Epiphany</a>. 
What I didn’t know is that these guys have joined forces to build a movement they are calling “lean startups.” After attending the <a href='http://sllconf.com/'>Startup Lessons Learned conference</a> last week, I now believe this methodology is on its way to making a major impact on the world of entrepreneurship.</p><p>Like many folks in the startup crowd, I’m a reader of <a href='http://www.startuplessonslearned.com/'>Eric Ries' blog</a> (<a href='http://adam.blog.heroku.com/past/2009/2/5/stealth/'>some</a> <a href='http://adam.blog.heroku.com/past/2009/4/1/phases_of_a_companys_life/'>links</a>), and I’ve read Steve Blank’s <a href='http://books.google.com/books?id=oLL2pjn2RV0C'>Four Steps to the Epiphany</a>. What I didn’t know is that these guys have joined forces to build a movement they are calling “lean startups.” After attending the <a href='http://sllconf.com/'>Startup Lessons Learned conference</a> last week, I now believe this methodology is on its way to making a major impact on the world of entrepreneurship.</p>
<img src='http://www.sllconf.com/wp-content/themes/Social%20Gaming%20Summit/img/sll_logo1.png' style='margin-bottom: 6px' />
<p>Lean startup methodology has a lot in common with <a href='http://en.wikipedia.org/wiki/Agile_software_development'>agile</a>. But where agile applies to software, the lean startup applies to customers and markets. Customer discovery, validation of markets, iteration on product, and intensive customer feedback are all part of the lean startup.</p>
<p>The energy at the conference reminded me of what Ruby conferences were like a few years ago. Charismatic, passionate, opinionated leaders draw together a crowd of strangers; and then those strangers look around to realize they are surrounded by people that share their passions. It’s the birth of community.</p>
<p>I took some notes during some of the talks. What follows are some of the quotes I jotted down, and some commentary.</p>
<h2>Randy Komisar on Pivots</h2>
<p>Randy Komisar wrote <a href='http://www.amazon.com/Getting-Plan-Breaking-Through-Business/dp/1422126692'>Getting to Plan B: Breaking Through to a Better Business Model</a>. His thesis is: your first idea never works, but that’s ok. What’s really important is getting to the next idea, and the next and the next, zeroing in on something that <i>will</i> work - and all of this as quickly and as cheaply as possible. Transitioning between plans is called a <b>pivot</b>, a word that was in heavy use by most of the speakers at the conference.</p>
<p>Some quotes from Komisar:</p>
<ul>
<li>“Plan A never works”</li>
<li>“‘Lean’ means get to the right answer with as little time and money as possible”</li>
<li>“I invest in people irrationally committed to a purpose” - Founders believe in a vision; maximizing their personal wealth is a side-effect, not a primary purpose. Being an entrepreneur is not a good way to make money, even though some people strike it rich.</li>
<li>“Leap of faith question” - The premise your startup is built on. What question can you ask, where the answer will make or break your business? For example, “People will pay more for outstanding design” might have been Apple’s leap of faith in the 2000s. “People will switch to using personal productivity software on the web” could have been 37Signals’ leap of faith.</li>
<li>“Once you decide to change, you will always wish you changed earlier”</li>
<li>“Everything is derivative - that’s not a bad thing. Steal liberally”</li>
<li>“We’ve got to zig and zag through the realities of the opportunities in front of us and the information they are giving us” - Founders aren’t founders because they know what to do. They’re founders because they can figure out what to do, quickly, in the face of rapidly changing information. This is why, for example, fixed business plans are of no use in a startup.</li>
</ul>
<p>During the discussion with Randy, Eric Ries used the term “success theater” to describe what happens in boardrooms when plan A starts to go south. Instead of admitting “what we’re doing isn’t working, we need to try something else,” founders dress up the trajectory of the business in false clothes. This doesn’t help anyone in the long term.</p>
<p>Pivots are what startups do. The sooner that investors, founders, early employees, and early customers come to grips with this, the less heartache needs to surround each pivot, and the quicker you can get to the right answer.</p>
<h2>Steve Blank on Entrepreneurship</h2>
<p>Much as I like Four Steps to the Epiphany, I’ve never gotten much value from <a href='http://steveblank.com/'>Steve Blank's blog</a> - so I wasn’t expecting much from his talk. To my surprise, I was absolutely riveted. While Eric Ries is the father of the lean startup movement, Steve Blank is a very active and hands-on grandfather. His presentation was both enlightening and inspiring.</p>
<p>There was so much good stuff in this talk it’s hard to capture it all. A few quotes:</p>
<ul>
<li>“A startup is a search for a scalable, repeatable business model”</li>
<li>“No business plan survives first contact with customers”</li>
<li>“Startups search and pivot. Large companies execute.”</li>
<li>“Founders make order from chaos”</li>
<li>“Lean startup is the first business methodology that is being crowdsourced and developed iteratively - we’re collectively getting smarter at a scary rate”</li>
<li>“My personal goal is to change the state of entrepreneurial education in the United States”</li>
<li>“In the 1950s, Venture Capital was called Adventure Capital”</li>
</ul>
<p>Blank lays out the lifecycle of a scalable startup in three phases: search, build, grow.</p>
<ul>
<li>Search - The one and only mission of the company in its early life is to search for a scalable business model. Nothing else matters. Small team, little to no management, very little of the formal trappings of a company. Staying lean, nimble, and chaotic is how you search rapidly. Formality and structure only slow you down.</li>
<li>Build - Once the business model is found (in technology, this usually comes in the form of a software product that people love and have demonstrated willingness to pay for), the company starts to build out. Here the team is expanding, infrastructure is being put in place, and branding and market position are clarified. The organization goes from feeling like a ragtag band of buddies working on something made out of passion and elbow grease to something that feels like a “real” company.</li>
<li>Growth - Everything is figured out and the company’s direction is decided: it’s now a matter of turning up the volume and continuing the business model on increasingly large scales. This is generally where the founders and many of the early employees of the company will make an exit. There are examples of founders who have stayed on through the final phase: Bill Gates, Steve Jobs, Larry Ellison. But these guys are the exception, not the rule (and that’s part of why they are famous). Founders need to be aware of, and prepared for, the likelihood that success means they have made themselves irrelevant in the organization they have built.</li>
</ul>
<h2>A Tale of Two Businessmen</h2>
<p>Blank closed with a fascinating story about two figures involved in the early life of General Motors. The first was <a href='http://en.wikipedia.org/wiki/Alfred_Sloan'>Alfred Sloan</a>. Sloan was the CEO of GM in the early part of the 20th century. He’s widely recognized as the man who took GM to being the largest company in the world. <a href='http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=sloan+business+school'>Many business schools are named after him</a>, and his managerial style was considered to be a pioneering approach that defined the new business of the 20th century.</p>
<img src='http://upload.wikimedia.org/wikipedia/en/a/a7/Williamcrapodurant.jpg' style='float: right; border: 2px solid #777; margin-left: 12px; margin-bottom: 6px' />
<p>The other player in this story is virtually unknown: <a href='http://en.wikipedia.org/wiki/Billy_Durant'>Billy Durant</a>. Durant founded GM and took it up to $3.6 billion in revenue (that number is adjusted for today’s dollars, if I’m recalling correctly). He was then fired by the board of directors, and he left to found Chevrolet. He quickly grew <i>that</i> company until it was bigger than GM, and then he bought GM. This guy was the Steve Jobs of his day - why don’t we remember him?</p>
<p>The answer is that the last century of business education has focused almost entirely on the last stage of a company’s life. Business degrees are MBAs, which Blank cautions are useless or perhaps even harmful in the early life of a startup. (MBAs working at a startup will try to apply their knowledge, creating structure and formality at a time when that’s the worst possible thing you can do.) Blank feels that entrepreneurial education should be separate from business education - B-schools can give out MBAs, and E-schools should give out MEAs.</p>
<p>He argues we’ve seen the first glimpse of this in the past several years, pioneered by <a href='http://en.wikipedia.org/wiki/Y_Combinator'>Y Combinator</a>. Blank points out that there are now over 100 (!) YC clones in operation, proof of the huge thirst for startup-focused education. He has a goal of bringing this entrepreneurial education into a more academic setting as well.</p>
<p>While he hasn’t done this yet (though he sounds quite serious about it), he offers up a small bit of entertainment to tide us over: <a href='http://www.cafepress.com/durantschool'>the Durant School of Entrepreneurship</a>, available in T-shirt form.</p>Beanstalk, a Simple and Fast Queueing Backendhttp://adam.heroku.com/past/2010/4/24/beanstalk_a_simple_and_fast_queueing_backend/2010-04-24T14:08:37-07:002010-04-24T14:08:37-07:00Adam Wiggins<p>Web apps are increasingly focused on background jobs. In fact, the term “background job” almost seems inaccurate - the heavy lifting done by worker processes is often the meat of the app’s purpose. The web portion of the app, by comparison, does only the relatively lightweight work of putting job requests into queues, and later presenting the results of jobs as HTML or JSON.</p>
<p>I’ve previously written about <a href='http://adam.blog.heroku.com/past/2009/4/14/building_a_queuebacked_feed_reader_part_1/'>queueing via Delayed Job</a>. DJ uses your database as its backend, which is a great way to start, but doesn’t scale well.</p>
<p>I’ve also described <a href='http://adam.blog.heroku.com/past/2009/9/28/background_jobs_with_rabbitmq_and_minion/'>Minion backed by RabbitMQ</a> for a more robust queueing solution. While I love <a href='http://github.com/orionz/minion'>Minion</a>’s simple jobs DSL, RabbitMQ can feel like overkill for apps that aren’t huge distributed systems. AMQP is a complex protocol with lots of capabilities outside the scope of job queueing. These capabilities become dead weight for most apps, which only need a way to enqueue and work jobs. I find this especially poignant when I’m building an app that uses Sinatra, Redis, and Memcache. RabbitMQ’s ponderous footprint doesn’t fit in with these nimble backend daemons.</p>
<img src='http://hirodusk.s3.amazonaws.com/beanstalk.png' style='float: right; margin-left: 8px; margin-bottom: 3px' /><h2>Discovering Beanstalk</h2>
<p><a href='http://www.igvita.com/'>Ilya Grigorik</a> pointed me toward <a href='http://kr.github.com/beanstalkd/'>Beanstalk</a>, a job queueing backend inspired by Memcache. It’s simple, lightweight, and completely specialized on job queueing. They use it at <a href='http://www.postrank.com/'>PostRank</a> to process millions of jobs a day, so it does perform at scale.</p>
<p>I’ve found Beanstalk to be a joy to use. The difference between RabbitMQ and Beanstalk reminds me of the difference between Apache and Nginx, or between Squid and Varnish. It gives 80% of the functionality with 20% of the weight and complexity. The authors have definitely achieved their goal of making a job queueing backend which has the same clean simplicity as memcached.</p>
<h2>Installation</h2>
<p>On Mac OS X, install Beanstalkd like this:</p>
<code><pre><span class="global">$ </span><span class="ident">sudo</span> <span class="ident">port</span> <span class="ident">install</span> <span class="ident">beanstalkd</span>
</pre></code>
<p>(Or build <a href='http://xph.us/dist/beanstalkd/beanstalkd-1.4.4.tar.gz'>from source</a>.)</p>
<p>Running it couldn’t be simpler:</p>
<code><pre><span class="global">$ </span><span class="ident">beanstalkd</span>
</pre></code>
<h2>Stalker, a Minion-like Job Queueing DSL</h2>
<p>The <a href='http://beanstalk.rubyforge.org/'>Ruby beanstalk client</a> is extremely simple - put a string onto a queue, pull it off later. This is great, but it’s just a smidge too unstructured for my taste. So I wrote <a href='http://github.com/adamwiggins/stalker'>Stalker</a>, a DSL almost identical to Minion, but for Beanstalk.</p>
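<p>For comparison, here’s a rough sketch of what talking to Beanstalk directly looks like. This is hedged: it assumes the beanstalk-client gem’s Pool API and a beanstalkd running locally on the default port.</p>
<pre><code>require 'beanstalk-client'
require 'json'

# Connect to one (or more) beanstalkd servers.
beanstalk = Beanstalk::Pool.new(['localhost:11300'])

# Producer side: put a string (here, some JSON) onto the queue.
beanstalk.put({ 'email' => 'joe@example.com' }.to_json)

# Consumer side: block until a job is available, work it, then delete it.
job = beanstalk.reserve
puts job.body   # => the JSON string we enqueued
job.delete</code></pre>
<p>Stalker wraps this raw string-passing in named jobs with argument hashes and handler blocks.</p>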
<p>Enqueue jobs like so:</p>
<code><pre><span class="constant">Stalker</span><span class="punct">.</span><span class="ident">enqueue</span><span class="punct">('</span><span class="string">email.send</span><span class="punct">',</span> <span class="symbol">:email</span> <span class="punct">=></span> <span class="punct">'</span><span class="string">joe@example.com</span><span class="punct">')</span>
</pre></code>
<p>In a jobs.rb file, define how to work each job:</p>
<code><pre><span class="ident">include</span> <span class="constant">Stalker</span>
<span class="ident">job</span> <span class="punct">'</span><span class="string">email.send</span><span class="punct">'</span> <span class="keyword">do</span> <span class="punct">|</span><span class="ident">args</span><span class="punct">|</span>
<span class="constant">Pony</span><span class="punct">.</span><span class="ident">email</span><span class="punct">(</span><span class="symbol">:to</span> <span class="punct">=></span> <span class="ident">args</span><span class="punct">['</span><span class="string">email</span><span class="punct">'],</span> <span class="symbol">:subject</span> <span class="punct">=></span> <span class="punct">"</span><span class="string">Hello there!</span><span class="punct">")</span>
<span class="keyword">end</span>
</pre></code>
<p>Now you can run one or more worker processes to work your jobs. Stalker includes a handy binary:</p>
<code><pre><span class="global">$ </span><span class="ident">stalk</span> <span class="ident">jobs</span><span class="punct">.</span><span class="ident">rb</span>
<span class="punct">[</span><span class="constant">Sat</span> <span class="constant">Apr</span> <span class="number">17</span> <span class="number">14</span><span class="punct">:</span><span class="number">13</span><span class="punct">:</span><span class="number">40</span> <span class="punct">-</span><span class="number">0700</span> <span class="number">2010</span><span class="punct">]</span> <span class="constant">Working</span> <span class="number">3</span> <span class="ident">jobs</span> <span class="punct">::</span> <span class="punct">[</span> <span class="ident">email</span><span class="punct">.</span><span class="ident">send</span> <span class="ident">twitter</span><span class="punct">.</span><span class="ident">post</span> <span class="ident">image</span><span class="punct">.</span><span class="ident">resize</span> <span class="punct">]</span>
</pre></code>
<p>By default, it will work all jobs you’ve defined. But you can also filter it down to a list by specifying job names on the command line:</p>
<code><pre><span class="global">$ </span><span class="ident">stalk</span> <span class="ident">jobs</span><span class="punct">.</span><span class="ident">rb</span> <span class="ident">email</span><span class="punct">.</span><span class="ident">send</span><span class="punct">,</span><span class="ident">twitter</span><span class="punct">.</span><span class="ident">post</span>
<span class="punct">[</span><span class="constant">Sat</span> <span class="constant">Apr</span> <span class="number">17</span> <span class="number">14</span><span class="punct">:</span><span class="number">13</span><span class="punct">:</span><span class="number">40</span> <span class="punct">-</span><span class="number">0700</span> <span class="number">2010</span><span class="punct">]</span> <span class="constant">Working</span> <span class="number">2</span> <span class="ident">jobs</span> <span class="punct">::</span> <span class="punct">[</span> <span class="ident">email</span><span class="punct">.</span><span class="ident">send</span> <span class="ident">twitter</span><span class="punct">.</span><span class="ident">post</span> <span class="punct">]</span>
</pre></code>
<p>This will allow you to run one pool of workers for fast or high-priority jobs, and another pool for general work.</p>
<h2>Features for Job Queueing</h2>
<p>Though Beanstalk is lightweight, its laser-sharp focus on job queueing allows it to deliver many features that are extremely useful for exactly that purpose. For example:</p>
<ul>
<li>Priorities - Give a number from 0 to 1000 when queueing a job and it will jump ahead of all jobs already enqueued with a higher number.</li>
<li>Persistence - Although beanstalkd stores its jobs in memory for speed and simplicity (ala memcached or redis-server), it can also save its state to a file so that you can cycle the beanstalkd process without losing any jobs.</li>
<li>Federation - Fault-tolerance and horizontal scalability are provided the same way as with Memcache - through federation by the client. Take a look at <a href='http://github.com/kr/beanstalk-client-ruby/blob/master/lib/beanstalk-client/connection.rb#L283-309'>how the Ruby client handles multiple beanstalkd servers</a> - it’s really quite clever.</li>
<li>Buried jobs - When a job causes an error, you can bury it. This keeps it around for later introspection and debugging (or even re-running it), while keeping it separated from active jobs.</li>
<li>Timeouts - The default behavior for jobs not acknowledged by a client (by deleting them when finished) is to re-queue. This prevents failed jobs (particularly from a client that loses its connection partway through the job) from getting lost, the same purpose served by <a href='http://github.com/orionz/minion/blob/master/lib/minion.rb#L66'>ack</a> in AMQP. Delayed Job uses its locked_at and locked_by fields for this purpose, but it’s very easy for a worker which doesn’t exit cleanly to leave jobs in a jammed/stuck state. Beanstalk’s reserve, work, delete cycle, with a timeout to dereserve the job, means it’s impossible for a bad client to prevent a job from completing (see the sketch just after this list).</li>
</ul>
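<p>To make the reserve/work/delete cycle (and burying) concrete, here is a minimal hand-rolled worker loop - a sketch only, assuming the beanstalk-client gem; a library like Stalker runs a loop along these lines for you:</p>
<pre><code>require 'beanstalk-client'

beanstalk = Beanstalk::Pool.new(['localhost:11300'])

loop do
  job = beanstalk.reserve   # blocks until a job is ready
  begin
    do_work(job.body)       # hypothetical method that performs the actual work
    job.delete              # acknowledge success; the job is gone for good
  rescue => e
    STDERR.puts "job failed: #{e.message}"
    job.bury                # set the failed job aside for later inspection or re-run
  end
end

# If this process dies mid-job without deleting or burying, the reservation
# times out and beanstalkd releases the job back to the ready queue.</code></pre>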
<p>Beanstalk’s features are described in more detail on the <a href='http://wiki.github.com/kr/beanstalkd/faq'>FAQ</a>.</p>
<h2>Performance</h2>
<p>Beanstalk feels very snappy overall. I ran some off-the-cuff benchmarks against a handful of Ruby-friendly queueing systems on my laptop, and here were my results:</p>
<table>
<tr><th /><th>enqueue</th><th>work</th></tr>
<tr><th>delayed job</th><td>200 jobs/sec</td><td>120 jobs/sec</td></tr>
<tr><th>resque</th><td>3800 jobs/sec</td><td>300 jobs/sec</td></tr>
<tr><th>rabbitmq</th><td>2500 jobs/sec</td><td>1300 jobs/sec</td></tr>
<tr><th>beanstalk</th><td>9000 jobs/sec</td><td>5200 jobs/sec</td></tr>
</table>
<p>Don’t take these numbers too seriously, as I didn’t make any attempt to be rigorous or simulate real-world conditions. But they do give some quantitative support to my sense that Beanstalk is smokin’ fast.</p>
<h2>Wrapup</h2>
<p><a href='http://github.com/adamwiggins/qfeedreader/commit/f443f03e43719bc3521c459f7ea32c18b7dcf855'>A port of QFeedreader to Stalker</a> requires only a few lines of code changed, but we get to cut out a ton of dependency gems required for the AMQP backend. Judged by weight of dependencies removed, switching to Beanstalk/Stalker looks favorable.</p>
<p>One thing still lacking in the Beanstalk community is good introspection tools - something that, so far, only Resque has made much progress on. <a href='http://github.com/dustin/beanstalk-tools'>Some command-line tools exist</a>, which indicate that the Beanstalk protocol has all the introspection capabilities necessary. So building a user-friendly introspection interface (command line or web) seems entirely possible.</p>
<p>Another thing missing from Beanstalk is authentication. The authors probably assume that you’re running in a traditional environment with IP/firewall-based access control, but this doesn’t jibe with cloud environments. Memcached recently added <a href='http://code.google.com/p/memcached/wiki/SASLAuthProtocol'>SASL</a> to solve this. <a href='http://groups.google.com/group/beanstalk-talk/browse_thread/thread/773ccde9f7b927'>I asked about this on the mailing list</a> and it seems the Beanstalk author(s) are open to this possibility.</p>
<p>Lastly, I note that right now the only queueing system available as a service is Amazon SQS. Beanstalk would make a beautiful multitenant cloud service - very similar to the way <a href='http://mongohq.com/'>MongoHQ</a> is running MongoDB as a service. I sense there is a great opportunity here for someone to found a Beanstalk-as-a-service startup.</p>Rethinking Cronhttp://adam.heroku.com/past/2010/4/13/rethinking_cron/2010-04-13T15:42:56-07:002010-04-13T15:42:56-07:00Adam Wiggins<p>Cron is a trusty tool in the unix toolbox for scheduling work to run at periodic intervals. In addition to system tasks, it’s common for app developers to use an app-specific crontab to run application tasks. For example, if your app is a feed reader, you might use a cronjob to fetch new feeds every three hours, and another cronjob to clean out old unread articles every night.</p>
<h2>Cron Weaknesses</h2>
<p><a href='http://commons.wikimedia.org/wiki/File:Zytglogge_clockface_detail.jpg'><img src='http://hirodusk.s3.amazonaws.com/clock.png' style='float: right; margin-left: 16px; margin-bottom: 6px; border: 2px solid #777' /></a></p>
<p>While application crontabs have served us well enough, this technique has a number of weaknesses.</p>
<p>One problem is that cron is per-machine, so once you scale to multiple app servers you’ll need locks stored in a shared location (database or memcache) to avoid scheduling the same job twice. Locks, in turn, require maintenance - cleaning up stale locks from cronjobs that exited abnormally or got stuck in an infinite loop. What was a one-line cronjob can quickly balloon into a whole mess of pidfiles, locks, and cleanup code.</p>
<p>Cron problems are difficult to debug. The arcane syntax of crontab is terse to the point of near inscrutability, making it easy to <a href='http://techno-weenie.net/2009/3/15/wtf-does-that-cron-do'>accidentally schedule jobs at the wrong time</a>. And the subtle differences between a cronjob’s shell environment and your command prompt’s shell environment can be maddening. Lack of feedback makes these or any other problem with your cronjobs difficult to diagnose.</p>
<p>Lastly, cronjobs have a tendency to turn into a kind of poor-man’s background job solution. Check the crontab for any reasonably complex application and there’s a good chance you’ll see a one-minute or five-minute cronjob which looks in the database for work to be done. This can almost always be better done with a job queueing + workers system. Cron is for scheduling things, not doing them.</p>
<p>While cron will remain the ideal solution for system tasks like log rotation for some time to come, the above problems with application use of cron suggest that it might be time for a new scheduling solution for apps.</p>
<h2>Cron Replacement Wishlist</h2>
<p>My wishlist for a new app scheduling solution is:</p>
<ul>
<li>Powerful and human-friendly syntax</li>
<li>Easy to test</li>
<li>Visibility</li>
<li>No difference between scheduler environment and one-off / test environment</li>
<li>Encourage use of a queueing system rather than doing the work directly in the scheduler</li>
<li>Scales without use of locks</li>
</ul>
<p>Recently, the <a href='http://flightcaster.com/'>Flightcaster</a> guys introduced me to <a href='http://github.com/bvandenbos/resque-scheduler'>resque-scheduler</a>. With resque-scheduler, you make a yaml file of jobs to be scheduled. When each specified time is reached, the job will be queued via the <a href='http://github.com/defunkt/resque'>Resque job queueing system</a>.</p>
<p>What’s most interesting to me is that resque-scheduler runs in a standalone, long-running daemon process. Launch it like this:</p>
<code><pre><span class="global">$ </span><span class="ident">rake</span> <span class="ident">resque</span><span class="symbol">:scheduler</span>
</pre></code>
<p>The standalone process is a fascinating solution to the locks problem. Because there’s only one process, you don’t need any locks - an approach that sounds strikingly similar to the reasons for using <a href='http://adamblog.heroku.com/past/2009/8/13/threads_suck/'>async</a>. A data format (yaml) rather than code prevents you from doing any work in the scheduler, since you can only specify the name of a job to queue. This enforces that the work will be done in the background workers, where it belongs. Since the scheduler process does no heavy lifting, there are no scalability issues.</p>
<p>For diagnostic/debug visibility, set up logging and exception handling (e.g. Exceptional, Hoptoad) exactly like you would for your web or worker processes. resque-scheduler also provides some extensions to the Resque web UI (<a href='http://github.com/bvandenbos/resque-scheduler'>screenshots at the bottom of this page</a>) for additional visibility and control.</p>
<h2>Generalizing the Single-Process Scheduler</h2>
<p>Resque-scheduler still uses a cron-style syntax for specifying when jobs will run; and Resque is not my favorite queueing system anyway (I prefer dedicated MQ backends like RabbitMQ, Kestrel, and Beanstalk). But the single-process scheduler idea implemented by resque-scheduler can easily be applied to other queueing systems. For example, you could use <a href='http://github.com/jmettraux/rufus-scheduler'>rufus-scheduler</a> in combination with <a href='http://adamblog.heroku.com/past/2009/9/28/background_jobs_with_rabbitmq_and_minion/'>Minion+RabbitMQ</a> to write a scheduler process for your app. In a file called scheduler.rb:</p>
<code><pre><span class="ident">require</span> <span class="punct">'</span><span class="string">rufus/scheduler</span><span class="punct">'</span>
<span class="ident">require</span> <span class="punct">'</span><span class="string">minion</span><span class="punct">'</span>
<span class="ident">scheduler</span> <span class="punct">=</span> <span class="constant">Rufus</span><span class="punct">::</span><span class="constant">Scheduler</span><span class="punct">.</span><span class="ident">start_new</span>
<span class="ident">scheduler</span><span class="punct">.</span><span class="ident">every</span> <span class="punct">'</span><span class="string">5m</span><span class="punct">'</span> <span class="punct">{</span> <span class="constant">Minion</span><span class="punct">.</span><span class="ident">enqueue</span><span class="punct">('</span><span class="string">twitter.refresh</span><span class="punct">')</span> <span class="punct">}</span>
<span class="ident">scheduler</span><span class="punct">.</span><span class="ident">every</span> <span class="punct">'</span><span class="string">3h</span><span class="punct">'</span> <span class="punct">{</span> <span class="constant">Minion</span><span class="punct">.</span><span class="ident">enqueue</span><span class="punct">('</span><span class="string">feeds.refresh</span><span class="punct">')</span> <span class="punct">}</span>
<span class="ident">scheduler</span><span class="punct">.</span><span class="ident">join</span>
</pre></code>
<p>You’ve probably already defined or documented somewhere a list of processes needed to run your app. This may be one or more web processes (mongrel_cluster start, thin start, or unicorn start) and one or more worker processes (rake jobs:work, rake resque:work, or ruby minion.rb). Add to this list your new scheduler process:</p>
<code><pre><span class="ident">ruby</span> <span class="ident">scheduler</span><span class="punct">.</span><span class="ident">rb</span> <span class="punct">>></span> <span class="ident">log</span><span class="punct">/</span><span class="ident">scheduler</span><span class="punct">.</span><span class="ident">log</span> <span class="number">2</span><span class="punct">>&</span><span class="number">1</span>
</pre></code>
<h2>Conclusion</h2>
<p>While the single-process scheduler approach is still in its infancy, I believe it bears strong potential for the future of application cron.</p>URLs are the Uniform Way to Locate Resourceshttp://adam.heroku.com/past/2010/3/30/urls_are_the_uniform_way_to_locate_resources/2010-03-30T16:06:38-07:002010-03-30T16:06:38-07:00Adam Wiggins<p>When you hear the term URL, what do you think of? Probably a web address - e.g., a publicly accessible HTML page such as http://google.com/ or http://news.ycombinator.com/. But URLs have a much wider application.</p>
<p>URL stands for Uniform Resource Locator. Decoding this, a URL is a uniform (standard) way to locate (find) any resource (service) over a network (the internet or a LAN).</p>
<p>Any time you wish to locate a resource on the internet, use a URL.</p>
<h2>Example: Git</h2>
<p>If you use Git, then you’ve probably already encountered a non-HTTP URL: the Git protocol. For example, here’s the URL to the public Git repo for the Paperclip file attachment library:</p>
<code><pre><span class="ident">git</span><span class="punct">:/</span><span class="regex"></span><span class="punct">/</span><span class="ident">github</span><span class="punct">.</span><span class="ident">com</span><span class="punct">/</span><span class="ident">thoughtbot</span><span class="punct">/</span><span class="ident">paperclip</span><span class="punct">.</span><span class="ident">git</span>
</pre></code>
<p>A Git repo is not an HTML page, but it is a resource on a network, so using a URL makes perfect sense.</p>
<p>You could potentially encode this repo’s location in another way. For example, you could break it out into pieces and provide it in a JSON file:</p>
<code><pre><span class="punct">{</span>
<span class="punct">"</span><span class="string">protocol</span><span class="punct">":</span> <span class="punct">"</span><span class="string">git</span><span class="punct">",</span>
<span class="punct">"</span><span class="string">host</span><span class="punct">":</span> <span class="punct">"</span><span class="string">github.com</span><span class="punct">",</span>
<span class="punct">"</span><span class="string">username</span><span class="punct">":</span> <span class="punct">"</span><span class="string">thoughtbot</span><span class="punct">",</span>
<span class="punct">"</span><span class="string">project</span><span class="punct">":</span> <span class="punct">"</span><span class="string">paperclip</span><span class="punct">"</span>
<span class="punct">}</span>
</pre></code>
<p>Why don’t we use this format for locating Git resources? There are a few potential answers, such as the convenience of being able to easily cut-and-paste the location into a command line tool or a URL bar. But the best answer is that our ad-hoc JSON format is not <strong>uniform</strong>. The JSON above would work for locating Git resources on Github, but nowhere else. URLs are standard and uniform.</p>
<h2>Example: Databases</h2>
<p>Another great example is the location of a database. One approach is to have a long list of configuration values, probably copied into a file like config/database.yml by hand, one at a time. This format is probably specific to your ORM, i.e. not standard or uniform in any way. It’s the equivalent of the JSON address we used to specify a Git repo in the previous section.</p>
<p>Just like Git, the more elegant approach is to put everything needed to locate the database into a URL. This will typically look something like:</p>
<code><pre><span class="ident">mysql</span><span class="punct">:/</span><span class="regex"></span><span class="punct">/</span><span class="ident">myuser</span><span class="symbol">:mypass@db8</span><span class="punct">.</span><span class="ident">myhost</span><span class="punct">.</span><span class="ident">com</span><span class="punct">:</span><span class="number">3306</span><span class="punct">/</span><span class="ident">mydatabase</span>
</pre></code>
<p>Ruby ORMs like Sequel and DataMapper use this very method. This makes configuring your database very simple:</p>
<code><pre><span class="constant">Sequel</span><span class="punct">.</span><span class="ident">connect</span><span class="punct">(</span><span class="ident">the_database_url</span><span class="punct">)</span>
</pre></code>
<p>Beautiful.</p>
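<p>And for a client library that doesn’t take a URL directly, the standard library’s URI class makes it easy to unpack one yourself - a quick sketch:</p>
<pre><code>require 'uri'

url = URI.parse('mysql://myuser:mypass@db8.myhost.com:3306/mydatabase')

url.scheme    # => "mysql"
url.user      # => "myuser"
url.password  # => "mypass"
url.host      # => "db8.myhost.com"
url.port      # => 3306
url.path      # => "/mydatabase" (strip the leading slash for the database name)</code></pre>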
<h2>Yet More Examples: RabbitMQ, Email, Memcache</h2>
<p>What else can we use URLs for? Anything that needs to be located on a network, be it the internet or a local network. For example, how about your RabbitMQ message queue?</p>
<code><pre><span class="ident">amqp</span><span class="punct">:/</span><span class="regex"></span><span class="punct">/</span><span class="ident">user</span><span class="symbol">:pass@hostname</span><span class="punct">/</span><span class="ident">vhost</span>
</pre></code>
<p>Or your SMTP mail server?</p>
<code><pre><span class="ident">smtp</span><span class="punct">:/</span><span class="regex"></span><span class="punct">/</span><span class="ident">user</span><span class="symbol">:pass@hostname</span><span class="punct">/</span><span class="ident">domain</span>
</pre></code>
<p>Or your Memcache server?</p>
<code><pre><span class="ident">memcache</span><span class="punct">:/</span><span class="regex"></span><span class="punct">/</span><span class="ident">hostname</span><span class="punct">/</span><span class="ident">prefix</span>
</pre></code>
<p>On this last item, you might point out that a Memcache cluster often has multiple hosts. Typically, these are specified in an array of IP addresses passed to the client object constructor. While this works, it’s not uniform. A better solution here is to use an internal hostname (such as memcache.internal.yourhost.com) which returns multiple A records, one per server in your cluster. The returned IPs may well be 10. or 192. addresses, not publicly addressable. In addition to allowing your memcache config to conform to the URL specification, this also gives the benefit of managing your server IPs in a single place, DNS. The alternative is hardcoding IPs into every component of your system that uses your memcache servers.</p>
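<p>As an illustrative sketch of that approach - assuming the standard library’s Resolv and the memcache-client gem, which accepts an array of “host:port” strings - the client-side lookup might look like:</p>
<pre><code>require 'uri'
require 'resolv'
require 'memcache'

# Hypothetical URL for the cluster; the hostname resolves to multiple A records.
url = URI.parse('memcache://memcache.internal.yourhost.com/myapp')

# Build one server entry per A record returned by DNS.
servers = Resolv.getaddresses(url.host).map { |ip| "#{ip}:11211" }

CACHE = MemCache.new(servers, :namespace => url.path.sub('/', ''))</code></pre>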
<h2>What About Extra Config Options?</h2>
<p>If the protocol for a given resource requires additional config options, you can pass them as query parameters:</p>
<code><pre><span class="ident">sqlite</span><span class="punct">:/</span><span class="regex"></span><span class="punct">/</span><span class="ident">development</span><span class="punct">.</span><span class="ident">sqlite3?encoding</span><span class="punct">=</span><span class="ident">utf8</span>
</pre></code>
<p>I would urge you to think carefully before using query params. 99% of cases should be representable within the base URL.</p>
<h2>Summary</h2>
<p>URLs are uniform. Use them to locate your resources.</p>Value-Creating Activitieshttp://adam.heroku.com/past/2010/3/22/valuecreating_activities/2010-03-22T01:13:42-07:002010-03-22T01:13:42-07:00Adam Wiggins<blockquote>
<p>Inspired by the lean manufacturing revolution (and excellent books like Lean Thinking), I started with a first fundamental question: in a startup, what activities are value-creating and which are waste? Usually, new projects are measured and held accountable to milestones and deadlines. When a project is on track, on time, and on budget, our intuition is that it is being well managed. This intuition is dead wrong.</p>
</blockquote>
<p>From <a href='http://blogs.hbr.org/cs/2010/01/is_entrepreneurship_a_manageme.html'>Is Entrepreneurship a Management Science?</a> by Eric Ries</p>Consuming the Twitter Streaming APIhttp://adam.heroku.com/past/2010/3/19/consuming_the_twitter_streaming_api/2010-03-19T11:01:54-07:002010-03-19T11:01:54-07:00Adam Wiggins<p>If you’ve been using polling to track Twitter search terms <a href='http://search.twitter.com/search?q=heroku'>(totally random example)</a>, you may have wondered if there is a more efficient and reliable method. The <a href='http://apiwiki.twitter.com/Streaming-API-Documentation'>Twitter streaming API</a> is a potential solution.</p>
<p>Try out the sample stream with curl:</p>
<code><pre><span class="global">$ </span><span class="ident">curl</span> <span class="ident">http</span><span class="punct">:/</span><span class="regex"></span><span class="punct">/</span><span class="ident">stream</span><span class="punct">.</span><span class="ident">twitter</span><span class="punct">.</span><span class="ident">com</span><span class="punct">/</span><span class="number">1</span><span class="punct">/</span><span class="ident">statuses</span><span class="punct">/</span><span class="ident">sample</span><span class="punct">.</span><span class="ident">json</span> <span class="punct">-</span><span class="ident">uYOUR_TWITTER_USERNAME</span><span class="symbol">:YOUR_PASSWORD</span>
</pre></code>
<p>Track a term in realtime, like “ruby”:</p>
<code><pre><span class="global">$ </span><span class="ident">curl</span> <span class="ident">http</span><span class="punct">:/</span><span class="regex"></span><span class="punct">/</span><span class="ident">stream</span><span class="punct">.</span><span class="ident">twitter</span><span class="punct">.</span><span class="ident">com</span><span class="punct">/</span><span class="number">1</span><span class="punct">/</span><span class="ident">statuses</span><span class="punct">/</span><span class="ident">filter</span><span class="punct">.</span><span class="ident">json?track</span><span class="punct">=</span><span class="ident">ruby</span> <span class="punct">-</span><span class="ident">uYOUR_TWITTER_USERNAME</span><span class="symbol">:YOUR_PASSWORD</span>
</pre></code>
<p>How do you integrate this into a Ruby app? Standard HTTP clients such as RestClient and HTTParty aren’t appropriate, since they’re designed for atomic HTTP requests, not streaming. With this API, you want to keep the socket open indefinitely, decoding JSON one line at a time.</p>
<p>Async I/O is the right tool for this job. Here’s an example script using Ilya Grigorik’s <a href='http://github.com/igrigorik/em-http-request'>evented HTTP client</a>. Install the em-http-request gem, then:</p>
<code><pre><span class="ident">require</span> <span class="punct">'</span><span class="string">eventmachine</span><span class="punct">'</span>
<span class="ident">require</span> <span class="punct">'</span><span class="string">em-http</span><span class="punct">'</span>
<span class="ident">require</span> <span class="punct">'</span><span class="string">json</span><span class="punct">'</span>
<span class="ident">usage</span> <span class="punct">=</span> <span class="punct">"</span><span class="string"><span class="expr">#{$0}</span> <user> <password></span><span class="punct">"</span>
<span class="ident">abort</span> <span class="ident">usage</span> <span class="keyword">unless</span> <span class="ident">user</span> <span class="punct">=</span> <span class="constant">ARGV</span><span class="punct">.</span><span class="ident">shift</span>
<span class="ident">abort</span> <span class="ident">usage</span> <span class="keyword">unless</span> <span class="ident">password</span> <span class="punct">=</span> <span class="constant">ARGV</span><span class="punct">.</span><span class="ident">shift</span>
<span class="ident">url</span> <span class="punct">=</span> <span class="punct">'</span><span class="string">http://stream.twitter.com/1/statuses/sample.json</span><span class="punct">'</span>
<span class="keyword">def </span><span class="method">handle_tweet</span><span class="punct">(</span><span class="ident">tweet</span><span class="punct">)</span>
<span class="keyword">return</span> <span class="keyword">unless</span> <span class="ident">tweet</span><span class="punct">['</span><span class="string">text</span><span class="punct">']</span>
<span class="ident">puts</span> <span class="punct">"</span><span class="string"><span class="expr">#{tweet['user']['screen_name']}</span>: <span class="expr">#{tweet['text']}</span></span><span class="punct">"</span>
<span class="keyword">end</span>
<span class="constant">EventMachine</span><span class="punct">.</span><span class="ident">run</span> <span class="keyword">do</span>
<span class="ident">http</span> <span class="punct">=</span> <span class="constant">EventMachine</span><span class="punct">::</span><span class="constant">HttpRequest</span><span class="punct">.</span><span class="ident">new</span><span class="punct">(</span><span class="ident">url</span><span class="punct">).</span><span class="ident">get</span> <span class="symbol">:head</span> <span class="punct">=></span> <span class="punct">{</span> <span class="punct">'</span><span class="string">Authorization</span><span class="punct">'</span> <span class="punct">=></span> <span class="punct">[</span> <span class="ident">user</span><span class="punct">,</span> <span class="ident">password</span> <span class="punct">]</span> <span class="punct">}</span>
<span class="ident">buffer</span> <span class="punct">=</span> <span class="punct">"</span><span class="string"></span><span class="punct">"</span>
<span class="ident">http</span><span class="punct">.</span><span class="ident">stream</span> <span class="keyword">do</span> <span class="punct">|</span><span class="ident">chunk</span><span class="punct">|</span>
<span class="ident">buffer</span> <span class="punct">+=</span> <span class="ident">chunk</span>
<span class="keyword">while</span> <span class="ident">line</span> <span class="punct">=</span> <span class="ident">buffer</span><span class="punct">.</span><span class="ident">slice!</span><span class="punct">(/</span><span class="regex">.+<span class="escape">\r</span>?<span class="escape">\n</span></span><span class="punct">/)</span>
<span class="ident">handle_tweet</span> <span class="constant">JSON</span><span class="punct">.</span><span class="ident">parse</span><span class="punct">(</span><span class="ident">line</span><span class="punct">)</span>
<span class="keyword">end</span>
<span class="keyword">end</span>
<span class="keyword">end</span>
</pre></code>
<p>Run this at the command line with your Twitter username and password as arguments, and it will start printing out results. In a real app, you’d replace the body of handle_tweet with code to do something like inserting the result into your database.</p>
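<p>As a sketch (hypothetical job name, and assuming the Stalker gem described earlier is available), handle_tweet could instead enqueue each tweet for background workers to process:</p>
<pre><code># Add to the requires at the top of the script:
require 'stalker'

# Replace the printing version of handle_tweet with one that enqueues a job.
def handle_tweet(tweet)
  return unless tweet['text']
  Stalker.enqueue('tweet.save',
    :screen_name => tweet['user']['screen_name'],
    :text        => tweet['text'])
end</code></pre>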
<p>Note that, even in a production app, you should never run more than one of these processes. It’s a background worker of sorts; you can think of the open socket as a queue that’s delivering jobs. But since this queue can’t split the work among multiple workers, you’re limited to just one.</p>Alumnihttp://adam.heroku.com/past/2010/3/18/alumni/2010-03-18T13:54:05-07:002010-03-18T13:54:05-07:00Adam Wiggins<blockquote>
<p>A company with a culture of quitting does not have ex-employees; they have alumni. This is far more than a semantic distinction. An alumni relationship is positive; something that people can take pride in; and one that keeps the door open for further opportunities on both ends.</p>
</blockquote>
<p>From <a href='http://thedailywtf.com/Articles/Up-or-Out-Solving-the-IT-Turnover-Crisis.aspx?'>Up or Out: Solving the IT Turnover Crisis</a> by Alex Papadimoulis</p>Salivation, Espresso Machines, and Tearshttp://adam.heroku.com/past/2010/3/17/salivation_espresso_machines_and_tears/2010-03-17T20:39:11-07:002010-03-17T20:39:11-07:00Adam Wiggins<p>Normally I’m not much for farewell posts (they’re metaposts, which I don’t like in general), but <a href='http://joelonsoftware.com/items/2010/03/14.html'>Joel Spolsky's pseudo-retirement</a> shows a self-aware sense of humor that I respect:</p>
<blockquote>
<p>What I am stopping is the traditional opinionated essay that has characterized Joel on Software for a decade. I’m not going to write Ten Ways to Get VCs to Salivate, I’m not going to write Why You Have To Buy a $10,000 Italian Espresso Machine for your Programmers, and I’m not going to write Python is For Aspergers Geeks or Ruby is for Tear-streaked Emo Teenagers. After a decade of this, the whole genre of Hacker News fodder is just too boring to me personally. It’s still a great format… the rest of you, knock yourselves out… I just can’t keep doing that particular thing.</p>
</blockquote>Graph Databaseshttp://adam.heroku.com/past/2010/3/15/graph_databases/2010-03-15T21:30:15-07:002010-03-15T21:30:15-07:00Adam Wiggins<p>Graph databases are a type of datastore which treats the relationship between things as equally important to the things themselves. Examples of datasets that are natural fits for graph databases:</p>
<ul>
<li>Friend links on a social network</li>
<li>“People who bought this also bought…” Amazon-style recommendation engines</li>
<li>The world wide web</li>
</ul>
<p>In graph database parlance, a thing (a person, a book, a website) is referred to as a “node,” while a relationship between two things (a friendship, a related book, an href) is referred to as an “edge.”</p>
<p>In most types of databases, the records stored in the database are nodes, and edges (relationships) are derived from a field on a node. In a SQL database, for example, you might have a table called “people” that includes a field “friend_id.” friend_id is a reference to another record in the people table.</p>
<p>The weakness with reference fields becomes apparent as soon as you want to do many-to-many relationships, or store data about the relationship. A person can have many friends; and you might want to track the date the friendship link was created, or whether the two people are married.</p>
<p>The solution to this in a SQL database is a join table. In the people/friends example, your join table might be called “friendships”. But this method has some weaknesses. One is that it can greatly increase the number of tables in your database, and may make it hard to tell apart standard tables (nodes) from join tables (edges) - which makes it more difficult for new developers to comprehend the database architecture. Another problem is that ORMs, which work quite well for mapping node (model) tables, generally have a much harder time mapping edges. (Witness all the thrashing about that happened during the development of has_many :through in ActiveRecord.)</p>
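<p>A minimal sketch of the join-model approach in ActiveRecord (hypothetical Person/Friendship models) shows the extra machinery involved:</p>
<pre><code>class Person < ActiveRecord::Base
  has_many :friendships                                            # the edge table
  has_many :friends, :through => :friendships, :source => :friend
end

class Friendship < ActiveRecord::Base
  belongs_to :person
  belongs_to :friend, :class_name => 'Person'
  # Attributes about the relationship (created_at, married, etc.) live here.
end</code></pre>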
<p>But the biggest weakness is that queries against relationship data - be it in a join table or a reference link - are extremely unwieldy. In a SQL database it typically leads to recursive joins, which tend to produce long, incomprehensible SQL statements and unpredictable performance.</p>
<p>A graph database is designed to represent this type of information, so it models the data more naturally. It’s also designed to query it: you can walk the data in a convenient and performant manner.</p>
<p>I’ve yet to try using a graph database, but the concept is intriguing. It’s yet another reminder that not every data modeling problem can be solved with the same hammer.</p>
<p>Further reading:</p>
<ul>
<li><a href='http://www.slideshare.net/emileifrem/neo4j-the-benefits-of-graph-databases-oscon-2009'>Presentation on graph databases with some enlightening diagrams</a></li>
<li><a href='http://neo4j.org/'>Neo4j</a></li>
<li><a href='http://www.kobrix.com/hgdb.jsp'>HyperGraphDB</a></li>
<li><a href='http://wiki.github.com/tinkerpop/gremlin/'>Gremlin, a graph query language</a></li>
<li><a href='http://techportal.ibuildings.com/2009/09/07/graphs-in-the-database-sql-meets-social-networks/'>Some tricky methods for modeling graphs in SQL</a></li>
<li><a href='http://highscalability.com/neo4j-graph-database-kicks-buttox'>High Scalability on Neo4j</a></li>
</ul>Grown, Not Builthttp://adam.heroku.com/past/2010/3/14/grown_not_built/2010-03-14T11:40:19-07:002010-03-14T11:40:19-07:00Adam Wiggins<blockquote>
<p>We just don’t write or release software the way we used to. Software isn’t so much built as it is grown. Software isn’t shipped … it’s simply made available by, often literally, the flip of a switch. This is not your father’s software. 21st century development is a seamless path from innovation to release where every phase of development, including release, is happening all the time. Users are on the inside of the firewall in that respect and feedback is constant. If a product isn’t compelling we find out much earlier and it dies in the data center. I fancy these dead products serve to enrich the data center, a digital circle of life where new products are built on the bones of the ones that didn’t make it.</p>
</blockquote>
<p>From <a href='http://googletesting.blogspot.com/2010/02/testing-in-data-center-manufacturing-no.html'>Testing in the Data Center (Manufacturing No More)</a> by James A. Whittaker</p>An HTML5 Offline App Examplehttp://adam.heroku.com/past/2010/2/25/an_html5_offline_app_example/2010-02-25T09:42:33-08:002010-02-25T09:42:33-08:00Adam Wiggins<p>If you’ve used GMail, Google Calendar, or other Google web apps on the iPhone, you’ve probably noticed that they store the app code in a local cache. Only the messages (or day’s events, or other dynamic data) are fetched when you load the app. This is because they use HTML5’s capabilities for offline caching.</p>
<p><a href='http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html'>The HTML5 draft has a simple clock example</a>, which shows how you can specify which files should be cached locally for offline use using something called a <strong>cache manifest</strong>.</p>
<p><a href='http://github.com/adamwiggins/cachemanifest'>I turned their example code into an app</a> deployable to Heroku. <a href='http://cachemanifest.heroku.com/'>Here's the live demo.</a> It should work in recent versions of Firefox (which prompts you to allow offline storage) and Safari (which doesn’t). Chrome doesn’t seem to support it yet.</p>
<p>Basically this boils down to some static HTML, CSS, and javascript; the cache manifest is the one additional piece of the puzzle, which tells the browser which files to cache. Its format is extremely simple:</p>
<pre>
CACHE MANIFEST
clock.html
clock.css
clock.js
</pre>
<p>The one potential gotcha is that the cache manifest has to be served with content type text/cache-manifest.</p>
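<p>How you do that depends on your server. As a hypothetical sketch, a small Sinatra app could serve the manifest explicitly:</p>
<pre><code>require 'sinatra'

get '/clock.manifest' do
  content_type 'text/cache-manifest'   # must be text/cache-manifest per the spec
  File.read('public/clock.manifest')
end</code></pre>
<p>You can verify the content type is correct with curl:</p>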
<pre>
$ curl -I http://cachemanifest.heroku.com/clock.manifest
HTTP/1.1 200 OK
Server: nginx/0.6.39
Date: Thu, 25 Feb 2010 02:53:24 GMT
Content-Type: text/cache-manifest
</pre>Above the Waterhttp://adam.heroku.com/past/2010/2/22/above_the_water/2010-02-22T14:04:39-08:002010-02-22T14:04:39-08:00Adam Wiggins<blockquote>
<p>A PaaS (Platform as a Service) environment is a bit like a swan on a pond – graceful and elegant above the water, and paddling its little legs off below the water. The aforementioned abstraction provides the elegant user experience “above the water,” while high levels of automation provide the “paddling” beneath the surface.</p>
</blockquote>
<p>From <a href='http://www.ebizq.net/topics/cloud_computing/features/12279.html'>Don't Pass on PaaS</a> by Sam Charrington</p>Uncertaintyhttp://adam.heroku.com/past/2010/2/11/uncertainty/2010-02-11T22:52:57-08:002010-02-11T22:52:57-08:00Adam Wiggins<p>Kevin Kelly writes on <a href='http://www.kk.org/thetechnium/archives/2010/01/the_2-billion-e.php'>how the internet has changed how he thinks:</a></p>
<blockquote>
<p>Uncertainty is a kind of liquidity. I think my thinking has become more liquid. It is less fixed, as text in a book might be, and more fluid, as say text in Wikipedia might be. My opinions shift more. My interests rise and fall more quickly. I am less interested in Truth, with a capital T, and more interested in truths, plural. I feel the subjective has an important role in assembling the objective from many data points. The incremental plodding progress of imperfect science seems the only way to know anything.</p>
</blockquote>