
CS202 Operating Systems: HW 9: Disk performance, file systems

These problems should be done on your own. We're not going to be grading them strictly (we'll mainly look at whether you attempted them). But they will be reinforcing knowledge and skills, so you should totally work through them carefully.

mmap()

Consider the following code excerpt that uses mmap():

#include <fcntl.h>     // open(), O_RDWR
#include <sys/mman.h>  // mmap(), PROT_* and MAP_* flags

int main(int argc, char* argv[]) {
    // ...
    // readme.txt is a file that is 5KB in length.
    int fd = open("readme.txt", O_RDWR, 0);

    char *map1 = (char*)mmap(NULL, 5120, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    char *map2 = (char*)mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    // assume that neither mmap call fails

    char x = 0;
    char y = 0;
    // Line 1
    for (int i = 0; i < 5120; i++) {
        x = map1[i];
        y = map2[i % 4096]; 
    }
    // Line 2
    // ...
}

// The signature of mmap is:
//
//   void* mmap(void *addr, size_t len, int prot, int flags, int fd, off_t offset);
//   
// If addr is NULL, the kernel chooses the start virtual address.
//
// If not already in memory, disk blocks are fetched into physical pages on access.
//
// Each separate call to mmap() with the same fd maps the file in a separate
// location and does not undo prior mappings. This means that in the example
// above, the file can be read and written from multiple virtual addresses
// within the same address space.
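
The excerpt assumes both mmap() calls succeed. In a real program you would check each return value against MAP_FAILED (not NULL) before touching the mapping; a minimal sketch of that check, reusing the names from the excerpt, looks like this:

char *map1 = (char*)mmap(NULL, 5120, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
if (map1 == MAP_FAILED) {   // mmap() returns MAP_FAILED and sets errno on error
    perror("mmap");         // requires <stdio.h>
    exit(1);                // requires <stdlib.h>
}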

Assume the operating system minimizes the number of virtual and physical pages required in order to implement mmap().

  1. How many last-level (level-4) page table entries are created or modified as a result of the two mmap() calls?

  2. At line 2, how many physical pages are mapped into the process as a result of the mmap() calls?

Context switches

In this question, you will implement swtch(), which switches between two user-level threads. You will do so for a user-level threading package, running on the TeensyArch processor. TeensyArch has 4 general registers, %r0-%r3, a stack pointer, a base (or frame) pointer, and an instruction pointer %rip. Assume the same stack frame structure as the architecture we’ve been covering in class (x86); further, all registers need to be saved by a function’s callee (that is, registers are callee-saved, also known as call-preserved).

Fill out swtch(). Below are definitions, declarations, and utility functions that you can use.

#include <stdint.h>

struct thread {
    int thread_id;
    uint64_t stack;
    /* ... */
};

// TeensyArch register names. ("register" itself is a C keyword, so the
// type is named reg.)
typedef enum {
    R0,
    R1,
    R2,
    R3,
    RBP,
    RSP
} reg;

// Push CPU's register r to the stack
void push_register(reg r);

// Pop from the stack and into the CPU's register r
void pop_register(reg r);

// Returns the CPU's current value of register r.
uint64_t read_register(reg r);

// Update the CPU's register r so it holds value `value`.
void write_register(reg r, uint64_t value);


// Context switch from thread t1 to thread t2.
void swtch(struct thread *t1, struct thread *t2) {
    // On entry this function is run by thread t1.

    // Your code here. We have started it for you.
    push_register(RBP);
    push_register(R0);

    // YOUR CODE HERE

    return; // The function should return to the
            // point where thread t2 called swtch().
}
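
One possible shape for such a switch, shown as a sketch rather than as the intended solution: save t1's callee-saved registers on its stack, record t1's stack pointer, install t2's saved stack pointer, restore t2's registers, and return; the final return then pops the return address that t2 pushed when it last called swtch(). The sketch assumes (this is not stated above) that the stack field of struct thread holds a thread's saved stack pointer, and the name swtch_sketch is used only to keep it distinct from the swtch() you are asked to complete.

// Sketch only; assumes t->stack stores the thread's saved stack pointer.
void swtch_sketch(struct thread *t1, struct thread *t2) {
    // Save t1's callee-saved registers on t1's stack.
    push_register(RBP);
    push_register(R0);
    push_register(R1);
    push_register(R2);
    push_register(R3);

    // Remember where t1's stack ends, then switch to t2's stack.
    t1->stack = read_register(RSP);
    write_register(RSP, t2->stack);

    // Restore t2's registers in the reverse of the order they were pushed.
    pop_register(R3);
    pop_register(R2);
    pop_register(R1);
    pop_register(R0);
    pop_register(RBP);

    return; // pops t2's saved return address, resuming where t2 called swtch()
}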

Polling vs. interrupts

As discussed in class, two ways for an operating system to become aware of external events associated with a device are interrupts and polling. We observed that if a computer were receiving many interrupts, it might spend all of its time processing them and not get other work done; in that case, the operating system should switch to polling the device. Now consider the following:

  • A computer has an attached keyboard. The keyboard has a 1024-byte internal memory buffer to hold the codes of recently-pressed keys, each of which consumes 2 bytes of buffer space. (The buffer is a FIFO, which for our purposes means that the OS simply reads from it and doesn’t manage the memory; if this parenthetical confuses you, you can ignore it.)

  • This computer and its OS take 1 microsecond (10⁻⁶ seconds) to handle an interrupt from the keyboard. That duration includes everything: reading from the keyboard’s buffer, determining which key was pressed, and painting the appropriate letter to the screen.

  • Assume that polling requires a fixed cost of 1 microsecond per poll. Further assume that, per poll, the operating system can read an arbitrary amount of the keyboard’s internal memory buffer, up to the entire size of that buffer.

  • Assume that, if polling, the operating system checks the device in question every 200 milliseconds.

  • Assume that humans are sensitive to lags of 100 milliseconds or greater. Specifically, if a human types a letter, that letter must appear on the screen less than 100 milliseconds after the human types it, to avoid annoyance.

  • You type exceptionally quickly: 200 words per minute. Assume that the average word has 7 letters, including the space at the end of the word.

  • Each key code (each letter, in other words) generates a separate interrupt.
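
As a rough structural sketch of the two approaches (every function name below is made up for illustration and is not part of the assignment):

#include <stdbool.h>
#include <stdint.h>

// Hypothetical device helpers, declared only so the sketch is self-contained.
uint16_t read_one_key_code(void);                    // read one 2-byte key code
bool     read_key_code_nonblocking(uint16_t *code);  // false once the FIFO is empty
void     paint_key(uint16_t code);                   // draw the letter on the screen

// Interrupt style: runs once per keypress, so the 1-microsecond handling cost
// is paid once per key.
void keyboard_interrupt_handler(void) {
    paint_key(read_one_key_code());
}

// Polling style: runs once per polling interval (200 ms above); a single
// 1-microsecond poll may drain everything currently in the 1024-byte buffer.
void keyboard_poll(void) {
    uint16_t code;
    while (read_key_code_nonblocking(&code)) {
        paint_key(code);
    }
}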

  1. How many interrupts per second would your typing generate on average? Show your work.
  2. Should the computer use polling or interrupts to handle your fast typing? Explain why your choice is acceptable and the other choice is not. Do not use more than three sentences.

Disk performance

Consider a disk with the following characteristics:

  • The disk rotates at 12,000 RPM (rotations per minute)
  • The disk has 10 platters (and 10 corresponding heads); the cost to change which head is active is zero
  • Each sector is 512 bytes
  • There are 1024 sectors per track (we are ignoring the fact that the number of sectors per track varies on a real disk)
  • There are 4096 tracks per platter
  • The average seek time is 15 ms.
  • Ignore the time to transfer the bits from the disk to memory; that is, once the disk head is positioned over the sector, the transfer happens instantaneously.
  1. What is the storage capacity of the disk in bytes or gigabytes? (Explain briefly.)
  2. What is the sequential transfer bandwidth, expressed in bytes/second or megabytes/second? (Explain briefly.)
  3. Now assume that the disk with the above characteristics is given a never-ending stream of requests to read one sector at a time, with each request chosen randomly from all possible sectors on the disk. Assume that these read requests are scheduled in FIFO order. State the effective long-term transfer rate that the disk can sustain, expressed in bytes/second or kilobytes/second, and explain briefly.

In doing the third question, the following may be useful:

  • You can (and probably should) make approximations that are off by a few percentage points; for example, treat 4096 as 4000, and 13 × 7.5 as approximately 100.
  • The term "long-term transfer rate" refers to R/X, where R is the number of bytes to transfer in each read, and X is the average length of time that a read takes.

Disk scheduling

Disk requests come into the disk driver for tracks 10, 22, 20, 2, 40, 6, and 38, in that order. A seek takes 6 msec per track moved (note that, among other simplifications, we are not taking into account the length of the seek). How much seek time is needed for the following scheduling algorithms?

(a) SSTF

(b) LOOK (SCAN, but doesn't move to the end)

In all cases, the arm is initially at track 20. (Note that SCAN is a synonym for the elevator algorithm.)

Adapted from Tanenbaum Chapter 5 Number 24.
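
If you want to check your by-hand schedules, the sketch below simulates both policies over the request list above and multiplies the total tracks moved by the 6 msec per-track cost. Note one assumption it makes that the problem does not state: its LOOK pass sweeps toward higher-numbered tracks first.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define NREQ 7

// SSTF: always serve the pending request closest to the current arm position.
static int total_tracks_sstf(int start, const int *req, int n) {
    bool done[NREQ] = { false };
    int pos = start, moved = 0;
    for (int served = 0; served < n; served++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {
            if (done[i]) continue;
            int d = abs(req[i] - pos);
            if (best < 0 || d < best_dist) { best = i; best_dist = d; }
        }
        done[best] = true;
        moved += best_dist;
        pos = req[best];
    }
    return moved;
}

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

// LOOK: sweep up through pending requests at or above the start (assumption:
// the arm initially moves toward higher tracks), then reverse and sweep down.
static int total_tracks_look(int start, const int *req, int n) {
    int sorted[NREQ];
    for (int i = 0; i < n; i++) sorted[i] = req[i];
    qsort(sorted, n, sizeof(int), cmp_int);

    int pos = start, moved = 0;
    for (int i = 0; i < n; i++) {
        if (sorted[i] >= start) { moved += sorted[i] - pos; pos = sorted[i]; }
    }
    for (int i = n - 1; i >= 0; i--) {
        if (sorted[i] < start)  { moved += pos - sorted[i]; pos = sorted[i]; }
    }
    return moved;
}

int main(void) {
    const int requests[NREQ] = { 10, 22, 20, 2, 40, 6, 38 };
    const int start = 20, ms_per_track = 6;

    printf("SSTF: %d ms\n", total_tracks_sstf(start, requests, NREQ) * ms_per_track);
    printf("LOOK: %d ms\n", total_tracks_look(start, requests, NREQ) * ms_per_track);
    return 0;
}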
