dr-xr-xr-x 1 root  root    4096 Dec 15 13:00 /
drwx-w---x 1 couch faculty 4096 Dec 15 13:00 /foo
-rw--w---x 1 cat   student 2965 Dec 15 13:25 /foo/bar
Can a user in group faculty create a file inside /foo? /foo is writeable to group, and has group faculty. The meaning of writeable for a directory is the ability to create and delete files. So yes, a user in group faculty can create (and delete) a file inside /foo. Note that execute access is also present for /, which is necessary.
Can a user in group student read the file /foo/bar? / is executable by other, and has owner and group root, so it is executable by a user in group student; thus /foo is visible to group student. /foo is executable by other, and has owner couch and group faculty, so it is executable by group student. Thus a user in group student can reach /foo/bar. But /foo/bar is writeable to group student, not readable. So no: /foo/bar cannot be read by group student.
Can a user in group faculty list the directory /foo? /foo is not readable by group, but it is readable by its owner couch, so yes, listing will work (for couch).
(Some players deal with potential conflicts between host and guest use of devices by assigning devices to either the guest or the host operating system, one at a time. More modern players use hypervisor machine instructions to virtualize I/O in the guest transparently, through the driver in the host.)
Consider timing the calls write and fdatasync. Unfortunately, the times for these calls do not reflect how fast data is written to the disk, but rather how fast data is written to the paging subsystem and the journal, respectively. When one calls write for a file, the call returns after the buffer is copied into the paging subsystem, but before the file is written to disk. When one calls fdatasync, the call returns after the page cache is flushed to a permanent place; that does not mean the block is in its final state on disk, only that it is in a persistent location (e.g., a journal). Thus the time for a write is much too small, while the time for an fdatasync is potentially too small or too large, depending upon the kind of filesystem!
A read-only page cannot be changed, so there is no potential for critical sections that write and read data at the same time. Thus there is no need for knowledge of sharing of a read-only page because there is nothing to coordinate.
A copy-on-write page is nothing more than a special kind of read-only page that becomes writeable when one process tries to write to it. That process gets a writeable copy, while the page remains read-only to the other processes. From the process's point of view, the page is always writeable, in the sense that a write will always succeed (by some mechanism unknown to the process). Again, no coordination between sharing processes is necessary, because the process of becoming writeable is transparent to them.
There is one thing the filesystem driver exploits about the raw disk driver and paging subsystem: local references to the same page are cached. This is the only knowledge the filesystem driver has to have about the paging subsystem.
In the case of "writing", processes serve as producers for the paging subsystem, by making changes in pages that become a job queue of things to post to the disk. The "consumer" is the "update" process (also known as the disk scheduler), which writes these page changes to disk.
In the case of "reading", P/C relationships are more difficult to describe. The processes make "requests", which form the producer queue. The paging subsystem reads these requests into memory, forming a consumer of "requests".
Note that the architectures of the read and write cases are quite different.
Yes, for the write case.
Predictability has many forms. The most important form is that process execution is not unnecessarily probabilistic or unnecessarily prone to race conditions. There are two main tradeoffs between predictability and efficiency: the first concerns initialization of memory, and the second concerns atomicity of I/O.
For a complete description of the initialization problem, see the answer to the next problem. Uninitialized data exposes bugs that are very difficult to locate, in which the value of a variable is used before it has been initialized.
A classic example of the second issue -- atomicity -- is to
consider what would happen if calls to
write were not
atomic. This would allow many more outcomes from a pair of competing
write calls (e.g.,
write(1,"hi there\n", 9) and
write(1,"ho ho ho\n", 9)) than the two outcomes that we
discussed in class. Further, these variants would occur extremely
infrequently, making it very difficult to debug programs, and
would require programs to do their own explicit I/O locking to achieve
predictable results. Thus, the operating system provides this locking itself.
When memory is added to the heap via sbrk in Linux, its initial state is all zeros. Everything I have told you about operating systems makes this decision counter-intuitive: data structures should self-initialize, and programmers should expect anything for the initial values of allocated storage. Why did the designers of memory allocation make this rather obvious exception to the rule of saving time whenever possible?
Thus, the real reason that heap frames are initialized to zero is that it avoids exposing initialization bugs in processes for the heap, which would make processes much more difficult to debug.