(5) In HDFS (the Hadoop Distributed File System), an application creates a new file by
writing data to it. Once the file is closed, the written bytes cannot be modified or removed; new data can only be appended
by reopening the file. HDFS implements a single-writer,
multiple-reader model: each time a client opens a file for writing, it
is granted a lease for the file, and no other client can write to that file.
A heartbeat to the NameNode permits the lease to be extended. Once the
lease expires and the file is closed, the changes become visible to
readers.
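The lease protocol above can be sketched as a toy model. The class and its names (LeaseManager, LEASE_PERIOD, open_for_write) are illustrative assumptions, not HDFS internals; the sketch only captures the rules stated in the text: one writer per file, lease renewal via heartbeat, and release on close.

```python
class LeaseManager:
    """Toy single-writer, multiple-reader lease model (names are illustrative)."""

    LEASE_PERIOD = 60.0  # seconds a lease stays valid without renewal

    def __init__(self):
        self._leases = {}  # path -> (writer_id, expiry_time)

    def open_for_write(self, path, writer, now):
        holder = self._leases.get(path)
        # Reject a second writer while another client's lease is still live.
        if holder and holder[0] != writer and holder[1] > now:
            raise PermissionError(f"{path} is leased to {holder[0]}")
        self._leases[path] = (writer, now + self.LEASE_PERIOD)

    def renew(self, path, writer, now):
        """Analogous to the writer's heartbeat to the NameNode."""
        holder = self._leases.get(path)
        if not holder or holder[0] != writer:
            raise PermissionError(f"{writer} does not hold the lease on {path}")
        self._leases[path] = (writer, now + self.LEASE_PERIOD)

    def close(self, path, writer):
        """Closing releases the lease; appended data becomes visible to readers."""
        holder = self._leases.get(path)
        if holder and holder[0] == writer:
            del self._leases[path]
```

A second client attempting to open a leased file fails until the holder closes it or lets the lease expire, after which the file can be reopened for append.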
In Hadoop, queues are allocated a fraction of the capacity
of the grid, in the sense that a certain share of resources is
at their disposal. All applications submitted to a queue
have access to the capacity allocated to that queue. Administrators
can configure soft limits and optional hard limits on the capacity
allocated to each queue.
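The interaction of soft and hard limits can be sketched as a simple allocation over capacity fractions. This is an illustrative toy (function and parameter names are assumptions, and the real Capacity Scheduler is far more involved): each queue is first guaranteed up to its soft limit, and leftover capacity is then offered to still-hungry queues, capped by their optional hard limits.

```python
def allocate(demands, soft, hard, total=1.0):
    """Toy capacity-queue allocation over fractions of the grid.

    demands: queue -> requested fraction
    soft:    queue -> guaranteed fraction (soft limit)
    hard:    queue -> optional maximum fraction (hard limit)
    """
    # Phase 1: every queue gets up to its guaranteed (soft) share.
    grant = {q: min(demands[q], soft[q]) for q in demands}
    spare = total - sum(grant.values())
    # Phase 2: spare capacity flows to unsatisfied queues, bounded by
    # each queue's hard limit (no hard limit -> the whole grid).
    for q in demands:
        if spare <= 0:
            break
        cap = hard.get(q, total)
        extra = min(demands[q] - grant[q], cap - grant[q], spare)
        if extra > 0:
            grant[q] += extra
            spare -= extra
    return grant
```

For example, a queue guaranteed 40% of the grid with a 45% hard limit can grow to 45% when other queues are idle, but no further, while a queue with no hard limit could absorb all spare capacity.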
(6) Haystack: Even though a needle (a photo) in Haystack is
stored on all physical volumes backing a logical volume, all updates go
through the same logical volume and are applied to all the replicas
of the photo. A Store machine receives requests to create, modify, and
delete a photo, and these operations for a given photo are handled by the same Store
machine.
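The write path above can be sketched as a minimal model in which a logical volume fans every operation out to all of its physical replicas. The class and method names are illustrative assumptions, not Haystack's actual interfaces.

```python
class LogicalVolume:
    """Toy Haystack-style logical volume backed by replicated physical volumes."""

    def __init__(self, num_replicas):
        # Each physical replica maps needle id (photo id) -> photo bytes.
        self.replicas = [dict() for _ in range(num_replicas)]

    def create(self, needle_id, data):
        # Every write is applied to all physical replicas.
        for replica in self.replicas:
            replica[needle_id] = data

    def modify(self, needle_id, data):
        # A modification writes the photo again on every replica.
        self.create(needle_id, data)

    def delete(self, needle_id):
        # A delete is likewise applied to every replica.
        for replica in self.replicas:
            replica.pop(needle_id, None)
```

Routing all three operations through the one logical volume keeps the replicas mutually consistent, since no replica can receive an update the others miss.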