Did they release any information on the reliability of the storage media or data verification? Besides size and access speed, that was the next biggest point requested for a "data archive" in our surveys. Otherwise, this does look like it would meet one of the Big Data use cases nicely.
From: owner-discuss@xxxxxxxxxxxxxxxxxxxxxxx [owner-discuss@xxxxxxxxxxxxxxxxxxxxxxx] on behalf of Foster, Ian T. [foster@xxxxxxx]
Sent: Thursday, March 12, 2015 4:12 PM
To: discuss@xxxxxxxxxxxxxxxxxxxxxxx
Subject: Re: Google "Nearline" service
We have S3 support in Globus; we want to add Glacier support, as quite a few campuses are asking for that. One day soon, I hope ...
Right, the pricing model for AWS Glacier definitely gets you when you do retrievals. Nearline looks a million times better in this regard.
Also, I'm thinking that access to the data would be mediated by a service layer. The service could make informed decisions (based on recent access) about which objects to keep in online vs nearline storage.
It may be as simple as: keep everything in Nearline; on access, move the object to online storage and keep it there for the next n days.
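A rough sketch of that policy, with the two tiers stood in by plain dicts (the names, the n=7 value, and the read/expire helpers are placeholders, not any actual Nearline or Globus API):

    import time

    ONLINE_TTL_DAYS = 7          # the "n days" in the policy; value is arbitrary here
    SECONDS_PER_DAY = 86400

    online = {}                                          # fast tier (stand-in)
    nearline = {"dataset-001": b"...archived bytes..."}  # archive tier (stand-in)
    last_access = {}             # object key -> unix time of most recent read

    def read(key):
        """Serve a read, promoting the object from Nearline to online if needed."""
        if key not in online:
            online[key] = nearline[key]   # copy up; the Nearline copy stays put
        last_access[key] = time.time()
        return online[key]

    def expire(now=None):
        """Demote (drop) online copies not read in the last ONLINE_TTL_DAYS."""
        now = time.time() if now is None else now
        cutoff = now - ONLINE_TTL_DAYS * SECONDS_PER_DAY
        for key in [k for k, t in last_access.items() if t < cutoff]:
            online.pop(key, None)         # the Nearline copy remains authoritative
            del last_access[key]

The service layer would run expire() periodically; everything else is a cache-promotion decision made at read time.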
Notice that the retrieval throughput scales with the amount of data you have in storage: 4 MB/s per TB stored. So at PB scale this is looking pretty good.
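To put a number on that (assuming the 4 MB/s-per-TB figure really does scale linearly all the way up):

    stored_tb = 1000                    # 1 PB resident in Nearline
    throughput_mb_s = 4 * stored_tb     # 4 MB/s of retrieval throughput per TB stored
    print(throughput_mb_s / 1000.0)     # -> 4.0, i.e. roughly 4 GB/s aggregate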
-john
Hi Johns (Towns and Readey),
JR: This is a pretty promising service! The discussion on Hacker News was interesting as well.
JT: The cost for retrieval is pretty low, and they obliquely compare it quite favorably to Glacier. Supposedly very, very fast retrieval speeds, too.
-Matt