Scaling *up* Hadoop for under-100-GB jobs

Architecture, Hadoop

“Nobody ever got fired for buying a cluster” – Microsoft Research

A very interesting Microsoft Research paper:

  • Showing that most of the supposedly huge jobs are still under 100 GB of input (yes, including at the elephant-friendly Facebook)
  • Addressing the “problematic” question of when, if at all, and how it is worth distributing a job with Hadoop (a minimal single-machine sketch follows below)
  • Addressing the cost efficiency (yes, watts and heat count…) of scaling up a single server, even while keeping Hadoop, for jobs under 100 GB

(Research by Raja Appuswamy, Christos Gkantsidis, Dushyanth Narayanan, Orion Hodson, and Antony Rowstron)
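
To get a concrete feel for the “one beefy machine” alternative the paper weighs against a cluster, here is a minimal sketch that runs a stock MapReduce word count through Hadoop’s local job runner, entirely in-process on a single box. The class name, job name, and input/output paths are hypothetical, and this is just one possible way to keep a sub-100 GB job on one machine, not the tuned scale-up configuration the authors actually benchmark.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.StringTokenizer;

public class LocalWordCount {  // hypothetical class name, for illustration only

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (word, 1) for every token in the input line.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts for each word.
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Run the job in-process on a single machine: no YARN, no HDFS.
        // "local" selects Hadoop's LocalJobRunner; file:/// keeps data on the local filesystem.
        conf.set("mapreduce.framework.name", "local");
        conf.set("fs.defaultFS", "file:///");

        Job job = Job.getInstance(conf, "local word count");  // job name is illustrative
        job.setJarByClass(LocalWordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. a local <100 GB dataset
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Pointing `args[0]` at a local directory of data and `args[1]` at a fresh output path is enough to run it on one machine; if the data ever outgrows the box, the same jar can be submitted unchanged to a cluster by switching `mapreduce.framework.name` back to `yarn`.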
