2nd Project of MC104 - OPERATING SYSTEMS

The Concept of Time in Real-Time Systems

 

Introduction

What is a real-time system? Many people define real-time in different ways, but one of the best definitions is a system that must perform under some specific time constraint. Real-time or real-world software must interact with the outside world while relying heavily on some factor involving time. This time factor can range from initiating a signal to ring a bell on the hour to measuring the temperature of a nuclear reactor core over 1000 times a second. Both of the above examples involve time, and both are time critical. If the bell is to sound on the hour, the system would be considered a failure if the bell sounded anywhere from five minutes before to five minutes after the hour. By the same token, the system measuring the reactor core temperature would be considered a failure if it failed to read the temperature for a second and the reactor overheated.

The primary focus of this experiment will be "time." The experiment will show you several methods by which to measure time in your applications. It will introduce the concept of time and present several important features that need to be understood about measuring time in an application. Finally, it will provide pointers to other methods that can be used to measure the time and performance of applications.

 

Objectives

The following are the primary objectives of this experiment:

 

Description

In physics, the Heisenberg Uncertainty Principle states that the position and momentum of a particle can never both be measured with arbitrary precision. In a similar manner, the time within a computer can never be measured with complete certainty. First and foremost, a computer is a digital machine. Therefore, its "world" is subdivided into a series of discrete steps. Whether these steps are 1 second or 1 nanosecond, a computer always has some physical limit on how finely it can measure time. In addition, the act of measuring time has a profound effect on the time that is being measured. Retrieving the time from the system clock and copying it into a local memory area where your program can access it takes a nonzero amount of time. Since the writers of the operating system are unlikely to have compensated for this time (it is nondeterministic), the measured time will be somewhat imprecise.

Therefore, this experiment will measure the uncertainty with which time can be measured in the system. For the purpose of this experiment, the system call gettimeofday(...) will be used. gettimeofday(...) is a system call originating in the BSD4.3 version of UNIX. (The man command can be used to obtain more information on the system call.) It returns a timeval structure (see below) containing the number of seconds and microseconds that have elapsed since January 1, 1970 at 00:00 (the time considered the birth of UNIX, also known as the Epoch). From the format of the timeval structure it is apparent that this system call can be no more precise than 1 microsecond. However, it would be nice to know just how accurate the call actually is. The purpose of the first example experiment is to determine exactly that.

  struct timeval {
    long tv_sec;    /* seconds elapsed since the Epoch */
    long tv_usec;   /* and microseconds */
  };
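
As a quick illustration (a minimal sketch, not part of the original handout), a single reading can be taken and printed like this:

  #include <stdio.h>
  #include <sys/time.h>

  int main(void)
  {
      struct timeval now;

      /* The second argument is the obsolete timezone pointer */
      gettimeofday(&now, NULL);
      printf("%ld seconds + %ld microseconds since the Epoch\n",
             (long) now.tv_sec, (long) now.tv_usec);
      return 0;
  }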

Example Program

Looking at the code needed to determine the resolution, or accuracy, of the gettimeofday(...) call, several interesting points quickly emerge. In order to obtain the most accurate results, the only operation performed in the first for loop is the gettimeofday(...) call itself. Whenever a measurement such as this is performed, it is important to perform only the minimum work necessary to make the measurement (in this case, incrementing the for loop counter) so as not to affect the value being measured.

Next, two values are calculated. The first is the average time it takes to complete the call, found by taking the sum of all the times and dividing it by the number of calls. The second is the maximum time, found by comparing each value to the previous maximum and keeping the higher of the two. The values are displayed only to microsecond precision, because gettimeofday(...) itself only returns microseconds, so anything beyond that is bound to be "uncertain."
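
The original source listing is not reproduced here; the following is a minimal sketch consistent with the description above (the number of calls, NCALLS, and the exact output format are assumptions of this sketch, not taken from the original program):

  #include <stdio.h>
  #include <sys/time.h>

  #define NCALLS 100000

  int main(void)
  {
      static struct timeval t[NCALLS];
      double sum = 0.0, max = 0.0;
      int i;

      /* Measurement loop: do nothing but call gettimeofday(...) */
      for (i = 0; i < NCALLS; i++)
          gettimeofday(&t[i], NULL);

      /* Compute the average and maximum interval, in milliseconds,
         between consecutive readings */
      for (i = 1; i < NCALLS; i++) {
          double d = (t[i].tv_sec  - t[i-1].tv_sec)  * 1000.0 +
                     (t[i].tv_usec - t[i-1].tv_usec) / 1000.0;
          sum += d;
          if (d > max)
              max = d;
      }

      printf("Average (ms): %.3f\n", sum / (NCALLS - 1));
      printf("Maximum (ms): %.3f\n", max);
      return 0;
  }

Compiling this sketch with, for example, cc -o resolution resolution.c and running it under different system loads yields the kind of measurements shown below.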

Running the above program on the benchmark machine, a 150 MHz Pentium running Linux 2.0.27, produced the following results (times are in milliseconds):

  Run   Load   Average (ms)   Maximum (ms)
   1      1       0.003           0.278
   2      1       0.003           0.519
   3      1       0.003           0.332
   4      2       0.006          42.705
   5      2       0.006          40.068
   6      2       0.006          22.953

The load is defined as the number of processes able to run over the previous second. A load of 0 indicates that there were no processes able to run during the last second. A load of 1 indicates that one process was able to run. In a single-processor environment, that process would get the entire CPU for the entire second (less operating system overhead). A load of 2 indicates that two processes were able to run during the past second. In this case, they would each get half of the processor for that time (assuming they are running at the same priority; priorities and scheduling are discussed in a later experiment). The load can be used to determine how the results of an experiment change when the system is performing operations in addition to the experiment. A load of one is easily placed on a machine simply by writing a program that performs an infinite loop; a program that simulates an arbitrary load is provided in the source code (see the sketch below). The load can be viewed by using the w command.
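
The course-provided load program is not reproduced here; a minimal sketch of such a load generator (the command-line interface is an assumption of this sketch) could be:

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(int argc, char *argv[])
  {
      int i, n = (argc > 1) ? atoi(argv[1]) : 1;

      /* Fork one busy-looping child per requested unit of load;
         each child adds roughly 1 to the load average */
      for (i = 0; i < n; i++)
          if (fork() == 0)
              for (;;)
                  ;           /* spin forever */

      pause();                /* parent sleeps; interrupt to stop */
      return 0;
  }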

 

Follow On Assignment

Each experiment has a follow-on assignment that needs to be completed. These assignments are to be completed in a specified format and must contain all of the information listed below. A description of the report format is provided.

Working With the Sample Program

The first part of the assignment is to compile and execute the above program (instructions are available) on several different operating systems and machines. Different computers will give different results. Try running the program on the Linux and Lynx machines in the real-time lab; these machines have a relatively light load and should provide consistent results. Then try running the program on the school's servers, such as erau or luke. These machines have a heavy load and will probably produce inconsistent results; in addition, the "worst case" will probably be far from the "average." Record the results in a chart as described in the report format pages.

Modifying the Sample Program

clock_gettime(...) is a real-time, POSIX.1b-compliant function that can also be used to obtain a time measurement. However, it is not available on all operating systems. At ERAU, the Lynx operating system provides the clock_gettime(...) system call. (Depending on the system, special compile-time options may be needed; see the instructions for information on compile options on the different ERAU systems.) clock_gettime(...) can use several different "clocks" depending on the system. To complete this experiment, modify the above program to use the clock_gettime(...) function, as in the sketch below. Again, record the results in the chart as described in the report format section.
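
As a starting point, the measurement loop might be modified as follows (a sketch only, assuming CLOCK_REALTIME is available; some systems require an extra link option such as -lrt):

  #include <stdio.h>
  #include <time.h>

  #define NCALLS 100000

  int main(void)
  {
      static struct timespec t[NCALLS];
      double sum = 0.0, max = 0.0;
      int i;

      /* Measurement loop: do nothing but call clock_gettime(...) */
      for (i = 0; i < NCALLS; i++)
          clock_gettime(CLOCK_REALTIME, &t[i]);

      /* timespec carries nanoseconds rather than microseconds */
      for (i = 1; i < NCALLS; i++) {
          double d = (t[i].tv_sec  - t[i-1].tv_sec)  * 1000.0 +
                     (t[i].tv_nsec - t[i-1].tv_nsec) / 1000000.0;
          sum += d;
          if (d > max)
              max = d;
      }

      printf("Average (ms): %.3f\n", sum / (NCALLS - 1));
      printf("Maximum (ms): %.3f\n", max);
      return 0;
  }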

Assignment Summary

 

Additional Information

There are several other ways to measure the performance of applications. Many UNIX systems provide a method for compiling instrumentation into a program, which can then be used to collect performance statistics. This is called profiling and is described in the UNIX documentation. Additionally, several modern UNIX shells can measure the time a program spends executing. See the shell documentation for more information on this method; search for the keyword time.

 

Due Date

 


Luís Fernando Faina
Last modified: Wed Sep 4 08:27:33 2002