Tuesday, July 17, 2007

Microsoft PhotoTours

This is an amazing piece of technology, presented at a TED conference.
http://www.collegehumor.com/video:1762315

More information is available on the Microsoft Research Website.

http://research.microsoft.com/IVM/PhotoTours/

There is also a live preview of this software available online. All you need to do is install a 5.5 MB Photosynth ActiveX control. The address is:

http://labs.live.com/photosynth/



http://www.crazyleafdesign.com/blog/photosynth-prototype/

Friday, July 13, 2007

Peter Löthberg's mother Sigbritt, 75, has world's fastest broadband

Sigbritt, 75, has world's fastest broadband
Published: 12th July 2007 11:07 CET
Online: http://www.thelocal.se/7869/

A 75 year old woman from Karlstad in central Sweden has been thrust into the IT history books - with the world's fastest internet connection.

Sigbritt Löthberg's home has been supplied with a blistering 40 Gigabits per second connection, many thousands of times faster than the average residential link and the first time ever that a home user has experienced such a high speed.

But Sigbritt, who had never had a computer until now, is no ordinary 75 year old. She is the mother of Swedish internet legend Peter Löthberg who, along with Karlstad Stadsnät, the local council's network arm, has arranged the connection.

"This is more than just a demonstration," said network boss Hafsteinn Jonsson.

"As a network owner we're trying to persuade internet operators to invest in faster connections. And Peter Löthberg wanted to show how you can build a low price, high capacity line over long distances," he told The Local.

Sigbritt will now be able to enjoy 1,500 high definition HDTV channels simultaneously. Or, if there is nothing worth watching there, she will be able to download a full high definition DVD in just two seconds.

The secret behind Sigbritt's ultra-fast connection is a new modulation technique which allows data to be transferred directly between two routers up to 2,000 kilometres apart, with no intermediary transponders.

According to Karlstad Stadsnät the distance is, in theory, unlimited - there is no data loss as long as the fibre is in place.

"I want to show that there are other methods than the old fashioned ways such as copper wires and radio, which lack the possibilities that fibre has," said Peter Löthberg, who now works at Cisco.

Cisco contributed to the project but the point, said Hafsteinn Jonsson, is that fibre technology makes such high speed connections technically and commercially viable.

"The most difficult part of the whole project was installing Windows on Sigbritt's PC," said Jonsson.

The Local (news@thelocal.se/08 656 6518)

Monday, July 09, 2007

chown on Linux using C++

lchown
NAME
lchown - change the owner and group of a symbolic link

#include <unistd.h>

int lchown(const char *path, uid_t owner, gid_t group);

DESCRIPTION
The lchown() function shall be equivalent to chown(), except in the case where the named file is a symbolic link. In this case, lchown() shall change the ownership of the symbolic link file itself, while chown() changes the ownership of the file or directory to which the symbolic link refers.

RETURN VALUE
Upon successful completion, lchown() shall return 0. Otherwise, it shall return -1 and set errno to indicate an error.

ERRORS
The lchown() function shall fail if:

EACCES
Search permission is denied on a component of the path prefix of path.
EINVAL
The owner or group ID is not a value supported by the implementation.
ELOOP
A loop exists in symbolic links encountered during resolution of the path argument.
ENAMETOOLONG
The length of a pathname exceeds {PATH_MAX} or a pathname component is longer than {NAME_MAX}.
ENOENT
A component of path does not name an existing file or path is an empty string.
ENOTDIR
A component of the path prefix of path is not a directory.
EOPNOTSUPP
The path argument names a symbolic link and the implementation does not support setting the owner or group of a symbolic link.
EPERM
The effective user ID does not match the owner of the file and the process does not have appropriate privileges.
EROFS
The file resides on a read-only file system.
The lchown() function may fail if:

EIO
An I/O error occurred while reading or writing to the file system.
EINTR
A signal was caught during execution of the function.
ELOOP
More than {SYMLOOP_MAX} symbolic links were encountered during resolution of the path argument.
ENAMETOOLONG
Pathname resolution of a symbolic link produced an intermediate result whose length exceeds {PATH_MAX}.
The following sections are informative.

EXAMPLES
Changing the Current Owner of a File
The following example shows how to change the ownership of the symbolic link named /modules/pass1 to the user ID associated with "Sri_Chinmoy" and the group ID associated with "guru".

The numeric value for the user ID is obtained by using the getpwnam() function. The numeric value for the group ID is obtained by using the getgrnam() function.

#include <sys/types.h>
#include <unistd.h>
#include <pwd.h>
#include <grp.h>

struct passwd *pwd;
struct group *grp;
char *path = "/modules/pass1";
...
pwd = getpwnam("Sri_Chinmoy");
grp = getgrnam("guru");
lchown(path, pwd->pw_uid, grp->gr_gid);
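
For reference, here is a minimal, self-contained sketch of the same operation with the error checking that the fragment above omits. The path, user name and group name are placeholders; substitute values that exist on your system.

#include <sys/types.h>
#include <unistd.h>
#include <pwd.h>
#include <grp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = "/modules/pass1";   /* symbolic link to re-own (placeholder) */
    struct passwd *pwd;
    struct group  *grp;

    pwd = getpwnam("Sri_Chinmoy");          /* look up the numeric user ID  */
    grp = getgrnam("guru");                 /* look up the numeric group ID */
    if (pwd == NULL || grp == NULL) {
        fprintf(stderr, "unknown user or group\n");
        exit(EXIT_FAILURE);
    }

    if (lchown(path, pwd->pw_uid, grp->gr_gid) == -1) {
        perror("lchown");                   /* errno describes the failure */
        exit(EXIT_FAILURE);
    }
    return 0;
}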

Advanced Unix Programming Source

A set of handy functions and classes for performing some basic UNIX tasks in C++

http://basepath.com/aup/ex/index.html

Thursday, July 05, 2007

POSIX Thread Basics

Further Threads Programming: Thread Attributes (POSIX)

PART I
The API covered here is for POSIX threads only; otherwise, the functionality of Solaris threads and pthreads is largely the same.

Attributes

Attributes are a way to specify behavior that is different from the default. When a thread is created with pthread_create() or when a synchronization variable is initialized, an attribute object can be specified. Note, however, that the default attributes are usually sufficient for most applications.

Important Note: Attributes are specified only at thread creation time; they cannot be altered while the thread is being used.

Thus, three functions are usually called in tandem:

* Thread attribute initialisation -- pthread_attr_init() creates a default pthread_attr_t tattr
* Thread attribute value change (unless defaults are appropriate) -- a variety of pthread_attr_*() functions are available to set individual attribute values in the pthread_attr_t tattr structure (see below).
* Thread creation -- a call to pthread_create() with appropriate attribute values set in a pthread_attr_t tattr structure.

The following code fragment should make this point clearer:

#include <pthread.h>

pthread_attr_t tattr;
pthread_t tid;
void *start_routine(void *);
void *arg;
int ret;

/* initialized with default attributes */
ret = pthread_attr_init(&tattr);

/* call an appropriate function to alter a default value */
ret = pthread_attr_*(&tattr, SOME_ATTRIBUTE_VALUE_PARAMETER);

/* create the thread */
ret = pthread_create(&tid, &tattr, start_routine, arg);

In order to save space, the code examples mainly focus on the attribute-setting functions; the initializing and creation functions are omitted. These must, of course, be present in all actual code fragments.

An attribute object is opaque, and cannot be directly modified by assignments. A set of functions is provided to initialize, configure, and destroy each object type. Once an attribute is initialized and configured, it has process-wide scope. The suggested method for using attributes is to configure all required state specifications at one time in the early stages of program execution. The appropriate attribute object can then be referred to as needed. Using attribute objects has two primary advantages:

* First, it adds to code portability. Even though supported attributes might vary between implementations, you need not modify function calls that create thread entities because the attribute object is hidden from the interface. If the target port supports attributes that are not found in the current port, provision must be made to manage the new attributes. This is an easy porting task though, because attribute objects need only be initialized once in a well-defined location.
* Second, state specification in an application is simplified. As an example, consider that several sets of threads might exist within a process, each providing a separate service, and each with its own state requirements. At some point in the early stages of the application, a thread attribute object can be initialized for each set. All future thread creations will then refer to the attribute object initialized for that type of thread. The initialization phase is simple and localized, and any future modifications can be made quickly and reliably.

Attribute objects require attention at process exit time. When the object is initialized, memory is allocated for it. This memory must be returned to the system. The pthreads standard provides function calls to destroy attribute objects.

Initializing Thread Attributes


The function pthread_attr_init() is used to initialize object attributes to their default values. The storage is allocated by the thread system during execution.

The function is prototyped by:

int pthread_attr_init(pthread_attr_t *tattr);

An example call to this function is:

#include <pthread.h>
pthread_attr_t tattr;
int ret;
/* initialize an attribute to the default value */
ret = pthread_attr_init(&tattr);

The default values for attributes (tattr) are:

Attribute     Value                      Result
scope         PTHREAD_SCOPE_PROCESS      New thread is unbound - not permanently attached to LWP.
detachstate   PTHREAD_CREATE_JOINABLE    Exit status and thread are preserved after the thread terminates.
stackaddr     NULL                       New thread has system-allocated stack address.
stacksize     1 megabyte                 New thread has system-defined stack size.
priority      -                          New thread inherits parent thread priority.
inheritsched  PTHREAD_INHERIT_SCHED      New thread inherits parent thread scheduling priority.
schedpolicy   SCHED_OTHER                New thread uses Solaris-defined fixed priority scheduling;
                                         threads run until preempted by a higher-priority thread or
                                         until they block or yield.

This function returns zero after completing successfully. Any other returned value indicates that an error occurred: if there is insufficient memory to initialize the thread attributes object, the function fails and returns ENOMEM.

Destroying Thread Attributes

The function pthread_attr_destroy() is used to remove the storage allocated during initialization. The attribute object becomes invalid. It is prototyped by:

int pthread_attr_destroy(pthread_attr_t *tattr);

A sample call to this function is:

#include <pthread.h>
pthread_attr_t tattr;
int ret;
/* destroy an attribute */
ret = pthread_attr_destroy(&tattr);

Attributes are declared as for pthread_attr_init() above.

pthread_attr_destroy() returns zero after completing successfully. Any other returned value indicates that an error occurred.

Thread's Detach State

When a thread is created detached (PTHREAD_CREATE_DETACHED), its thread ID and other resources can be reused as soon as the thread terminates.

If you do not want the calling thread to wait for the thread to terminate, then set the detach state with the function pthread_attr_setdetachstate().

When a thread is created nondetached (PTHREAD_CREATE_JOINABLE), it is assumed that you will be waiting for it. That is, it is assumed that you will be executing a pthread_join() on the thread. Whether a thread is created detached or nondetached, the process does not exit until all threads have exited.

pthread_attr_setdetachstate() is prototyped by:

int pthread_attr_setdetachstate(pthread_attr_t *tattr,int detachstate);

pthread_attr_setdetachstate() returns zero after completing successfully. Any other returned value indicates that an error occurred. If the detachstate argument is not a valid value, the function fails and returns EINVAL.

An example call to detach a thread with this function is:

#include <pthread.h>
pthread_attr_t tattr;
int ret;
/* set the thread detach state */
ret = pthread_attr_setdetachstate(&tattr,PTHREAD_CREATE_DETACHED);

Note - When there is no explicit synchronization to prevent it, a newly created, detached thread can die and have its thread ID reassigned to another new thread before its creator returns from pthread_create(). For nondetached (PTHREAD_CREATE_JOINABLE) threads, it is very important that some thread join with it after it terminates -- otherwise the resources of that thread are not released for use by new threads. This commonly results in a memory leak. So when you do not want a thread to be joined, create it as a detached thread.
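
For the joinable case, a minimal sketch (not from the original article) of creating a thread with the default attributes and reclaiming its resources with pthread_join() might look like this; the worker() function is just a placeholder:

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg)
{
    /* placeholder work; the return value is collected by pthread_join() */
    return arg;
}

int main(void)
{
    pthread_t tid;
    void *status;

    /* NULL attributes => default, i.e. PTHREAD_CREATE_JOINABLE */
    if (pthread_create(&tid, NULL, worker, (void *)"done") != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }

    /* joining releases the thread's resources and retrieves its exit value */
    pthread_join(tid, &status);
    printf("worker returned: %s\n", (char *)status);
    return 0;
}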

It is quite common that you will wish to create a thread which is detached from creation. The following code illustrates how this may be achieved with the standard calls to initialise and set and then create a thread:

#include <pthread.h>
pthread_attr_t tattr;
pthread_t tid;
void *start_routine(void *);
void *arg;
int ret;

/* initialized with default attributes */
ret = pthread_attr_init(&tattr);
ret = pthread_attr_setdetachstate(&tattr,PTHREAD_CREATE_DETACHED);
ret = pthread_create(&tid, &tattr, start_routine, arg);

The function pthread_attr_getdetachstate() may be used to retrieve the thread create state, which can be either detached or joinable. It is prototyped by:

int pthread_attr_getdetachstate(const pthread_attr_t *tattr, int *detachstate);

pthread_attr_getdetachstate() returns zero after completing successfully. Any other returned value indicates that an error occurred.

An example call to this function is:

#include <pthread.h>
pthread_attr_t tattr;
int detachstate;
int ret;

/* get detachstate of thread */
ret = pthread_attr_getdetachstate (&tattr, &detachstate);

Full Article here

http://www.cs.cf.ac.uk/Dave/C/node30.html


PART II

What Is a Thread? Why Use Threads

A thread is a semi-process, that has its own stack, and executes a given piece of code. Unlike a real process, the thread normally shares its memory with other threads (whereas for processes we usually have a different memory area for each one of them). A Thread Group is a set of threads all executing inside the same process. They all share the same memory, and thus can access the same global variables, same heap memory, same set of file descriptors, etc. All these threads execute in parallel (i.e. using time slices, or if the system has several processors, then really in parallel).
The advantage of using a thread group instead of a normal serial program is that several operations may be carried out in parallel, and thus events can be handled immediately as they arrive (for example, if we have one thread handling a user interface, and another thread handling database queries, we can execute a heavy query requested by the user, and still respond to user input while the query is executed).
The advantage of using a thread group over using a process group is that context switching between threads is much faster than context switching between processes (context switching means that the system switches from running one thread or process, to running another thread or process). Also, communication between two threads is usually faster and easier to implement than communication between two processes.
On the other hand, because threads in a group all use the same memory space, if one of them corrupts the contents of its memory, other threads might suffer as well. With processes, the operating system normally protects processes from one another, and thus if one corrupts its own memory space, other processes won't suffer. Another advantage of using processes is that they can run on different machines, while all the threads have to run on the same machine (at least normally).

Creating And Destroying Threads

When a multi-threaded program starts executing, it has one thread running, which executes the main() function of the program. This is already a full-fledged thread, with its own thread ID. In order to create a new thread, the program should use the pthread_create() function. Here is how to use it:


#include <stdio.h>       /* standard I/O routines                 */
#include <pthread.h>     /* pthread functions and data structures */

/* function to be executed by the new thread */
void*
do_loop(void* data)
{
    int i;                      /* counter, to print numbers */
    int j;                      /* counter, for delay        */
    int me = *((int*)data);     /* thread identifying number */

    for (i=0; i<10; i++) {
        for (j=0; j<500000; j++)   /* delay loop */
            ;
        printf("'%d' - Got '%d'\n", me, i);
    }

    /* terminate the thread */
    pthread_exit(NULL);
}

/* like any C program, program's execution begins in main */
int main(int argc, char* argv[])
{
    int thr_id;            /* thread ID for the newly created thread */
    pthread_t p_thread;    /* thread's structure                     */
    int a = 1;             /* thread 1 identifying number            */
    int b = 2;             /* thread 2 identifying number            */

    /* create a new thread that will execute 'do_loop()' */
    thr_id = pthread_create(&p_thread, NULL, do_loop, (void*)&a);
    /* run 'do_loop()' in the main thread as well */
    do_loop((void*)&b);

    /* NOT REACHED */
    return 0;
}
A few notes should be mentioned about this program:
  1. Note that the main program is also a thread, so it executes the do_loop() function in parallel to the thread it creates.
  2. pthread_create() gets 4 parameters. The first parameter is used by pthread_create() to supply the program with information about the thread. The second parameter is used to set some attributes for the new thread. In our case we supplied a NULL pointer to tell pthread_create() to use the default values. The third parameter is the name of the function that the thread will start executing. The fourth parameter is an argument to pass to this function. Note the cast to a 'void*'. It is not required by ANSI-C syntax, but is placed here for clarification.
  3. The delay loop inside the function is used only to demonstrate that the threads are executing in parallel. Use a larger delay value if your CPU runs too fast, and you see all the printouts of one thread before the other.
  4. The call to pthread_exit() causes the current thread to exit and free any thread-specific resources it is taking. There is no need to use this call at the end of the thread's top function, since when it returns, the thread would exit automatically anyway. This function is useful if we want to exit a thread in the middle of its execution.
In order to compile a multi-threaded program using gcc, we need to link it with the pthreads library. Assuming you have this library already installed on your system, here is how to compile our first program:

gcc pthread_create.c -o pthread_create -lpthread 

The source code for this program may be found in the pthread_create.c file.

Synchronizing Threads With Mutexes

One of the basic problems when running several threads that use the same memory space, is making sure they don't "step on each other's toes". By this we refer to the problem of using a data structure from two different threads.
For instance, consider the case where two threads try to update two variables. One tries to set both to 0, and the other tries to set both to 1. If both threads would try to do that at the same time, we might end up with a situation where one variable contains 1, and one contains 0. This is because a context-switch (we already know what this is by now, right?) might occur after the first thread zeroed out the first variable; then the second thread would set both variables to 1, and when the first thread resumes operation, it will zero out the second variable, thus getting the first variable set to '1', and the second set to '0'.

What Is A Mutex?

A basic mechanism supplied by the pthreads library to solve this problem, is called a mutex. A mutex is a lock that guarantees three things:
  1. Atomicity - Locking a mutex is an atomic operation, meaning that the operating system (or threads library) assures you that if you locked a mutex, no other thread succeeded in locking this mutex at the same time.
  2. Singularity - If a thread managed to lock a mutex, it is assured that no other thread will be able to lock the mutex until the original thread releases the lock.
  3. Non-Busy Wait - If a thread attempts to lock a mutex that was locked by a second thread, the first thread will be suspended (and will not consume any CPU resources) until the lock is freed by the second thread. At this time, the first thread will wake up and continue execution, having the mutex locked by it.
From these three points we can see how a mutex can be used to assure exclusive access to variables (or in general critical code sections). Here is some pseudo-code that updates the two variables we were talking about in the previous section, and can be used by the first thread:
lock mutex 'X1'.
set first variable to '0'.
set second variable to '0'.
unlock mutex 'X1'.


Meanwhile, the second thread will do something like this:

lock mutex 'X1'.
set first variable to '1'.
set second variable to '1'.
unlock mutex 'X1'.


Assuming both threads use the same mutex, we are assured that after they both ran through this code, either both variables are set to '0', or both are set to '1'. You'll note this requires some work from the programmer - if a third thread were to access these variables via some code that does not use this mutex, it could still mess up the variables' contents. Thus, it is important to enclose all the code that accesses these variables in a small set of functions, and always use only these functions to access these variables.
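
Translated into actual pthreads calls (introduced in the sections below), the pseudo-code above might look like the following sketch; the mutex and variable names are illustrative, not taken from the original text:

#include <pthread.h>

pthread_mutex_t x1_mutex = PTHREAD_MUTEX_INITIALIZER;
int first_var  = 0;
int second_var = 0;

/* executed by the first thread: set both variables to 0 */
void set_both_to_zero(void)
{
    pthread_mutex_lock(&x1_mutex);      /* lock mutex 'X1'   */
    first_var  = 0;
    second_var = 0;
    pthread_mutex_unlock(&x1_mutex);    /* unlock mutex 'X1' */
}

/* executed by the second thread: set both variables to 1 */
void set_both_to_one(void)
{
    pthread_mutex_lock(&x1_mutex);
    first_var  = 1;
    second_var = 1;
    pthread_mutex_unlock(&x1_mutex);
}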



Creating And Initializing A Mutex

In order to create a mutex, we first need to declare a variable of type pthread_mutex_t, and then initialize it. The simplest way is by assigning it the PTHREAD_MUTEX_INITIALIZER constant. So we'll use code that looks something like this:


pthread_mutex_t a_mutex = PTHREAD_MUTEX_INITIALIZER;


One note should be made here: This type of initialization creates a mutex called 'fast mutex'. This means that if a thread locks the mutex and then tries to lock it again, it'll get stuck - it will be in a deadlock.


There is another type of mutex, called a 'recursive mutex', which allows the thread that locked it to lock it several more times, without getting blocked (but other threads that try to lock the mutex now will get blocked). If the thread then unlocks the mutex, it'll still be locked, until it is unlocked the same number of times as it was locked. This is similar to the way modern door locks work - if you turned it twice clockwise to lock it, you need to turn it twice counter-clockwise to unlock it. This kind of mutex can be created by assigning the constant PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP to a mutex variable.
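
If your pthreads implementation does not provide the (non-portable) PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP constant, a recursive mutex can also be set up at run time through a mutex attribute object. A minimal sketch, assuming the implementation supports the PTHREAD_MUTEX_RECURSIVE type:

#include <pthread.h>

pthread_mutex_t r_mutex;

/* initialize 'r_mutex' as a recursive mutex at run time */
void init_recursive_mutex(void)
{
    pthread_mutexattr_t m_attr;

    pthread_mutexattr_init(&m_attr);
    pthread_mutexattr_settype(&m_attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&r_mutex, &m_attr);
    pthread_mutexattr_destroy(&m_attr);   /* the attribute object is no longer needed */
}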

Locking And Unlocking A Mutex

In order to lock a mutex, we may use the function pthread_mutex_lock(). This function attempts to lock the mutex, or block the thread if the mutex is already locked by another thread. In this case, when the mutex is unlocked by the first thread, the function will return with the mutex locked by our thread. Here is how to lock a mutex (assuming it was initialized earlier):


int rc = pthread_mutex_lock(&a_mutex);
if (rc) { /* an error has occurred */
    perror("pthread_mutex_lock");
    pthread_exit(NULL);
}
/* mutex is now locked - do your stuff. */
.
.



After the thread did what it had to (change variables or data structures, handle file, or whatever it intended to do), it should free the mutex, using the pthread_mutex_unlock() function, like this:


rc = pthread_mutex_unlock(&a_mutex);
if (rc) {
    perror("pthread_mutex_unlock");
    pthread_exit(NULL);
}


Destroying A Mutex

After we have finished using a mutex, we should destroy it. 'Finished using' means no thread needs it at all. If only one thread has finished with the mutex, it should leave it alive, for the other threads that might still need to use it. Once all have finished using it, the last one can destroy it using the pthread_mutex_destroy() function:


rc = pthread_mutex_destroy(&a_mutex);


After this call, this variable (a_mutex) may not be used as a mutex any more, unless it is initialized again. Thus, if one destroys a mutex too early, and another thread tries to lock or unlock it, that thread will get an EINVAL error code from the lock or unlock function.



Using A Mutex - A Complete Example

After we have seen the full life cycle of a mutex, let's see an example program that uses a mutex. The program introduces two employees competing for the "employee of the day" title, and the glory that comes with it. To simulate that at a rapid pace, the program employs 3 threads: one that promotes Danny to "employee of the day", one that promotes Moshe to that position, and a third thread that makes sure that the employee of the day's contents is consistent (i.e. contains exactly the data of one employee).
Two copies of the program are supplied. One that uses a mutex, and one that does not. Try them both, to see the differences, and be convinced that mutexes are essential in a multi-threaded environment.
The programs themselves are in the files accompanying this tutorial. The one that uses a mutex is employee-with-mutex.c. The one that does not use a mutex is employee-without-mutex.c. Read the comments inside the source files to get a better understanding of how they work.
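
The accompanying source files are not reproduced here, but the core idea can be condensed into a sketch like the one below (names and structure are illustrative, not the actual employee-with-mutex.c). Each promoting thread rewrites the whole record under the lock, so the checking thread can never observe a half-updated mix of the two employees:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

struct employee {
    int  number;                         /* 1 for Danny, 2 for Moshe */
    char name[20];
};

pthread_mutex_t emp_mutex = PTHREAD_MUTEX_INITIALIZER;
struct employee employee_of_the_day;     /* the shared data being protected */

/* promoting threads call this to overwrite the whole record atomically */
void set_employee_of_the_day(int number, const char *name)
{
    pthread_mutex_lock(&emp_mutex);
    employee_of_the_day.number = number;
    strncpy(employee_of_the_day.name, name, sizeof(employee_of_the_day.name) - 1);
    pthread_mutex_unlock(&emp_mutex);
}

/* the checking thread verifies that number and name belong to the same employee */
void check_employee_of_the_day(void)
{
    pthread_mutex_lock(&emp_mutex);
    if ((employee_of_the_day.number == 1 && strcmp(employee_of_the_day.name, "Danny") != 0) ||
        (employee_of_the_day.number == 2 && strcmp(employee_of_the_day.name, "Moshe") != 0))
        printf("inconsistent employee record!\n");
    pthread_mutex_unlock(&emp_mutex);
}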

Starvation And Deadlock Situations

Again we should remember that pthread_mutex_lock() might block for a non-determined duration if the mutex is already locked. If it remains locked forever, it is said that our poor thread is "starved" - it was trying to acquire a resource, but never got it. It is up to the programmer to ensure that such starvation won't occur. The pthread library does not help us with that.

The pthread library might, however, figure out a "deadlock". A deadlock is a situation in which a set of threads are all waiting for resources taken by other threads, all in the same set. Naturally, if all threads are blocked waiting for a mutex, none of them will ever come back to life again. The pthread library keeps track of such situations, and thus would fail the last thread trying to call pthread_mutex_lock(), with an error of type EDEADLK. The programmer should check for such a value, and take steps to solve the deadlock somehow.
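
As an illustration (not from the original tutorial), the simplest case of this detection is a thread relocking a mutex it already holds. With an error-checking mutex the second lock fails with EDEADLK instead of hanging forever; this sketch assumes the implementation supports the PTHREAD_MUTEX_ERRORCHECK type:

#include <pthread.h>
#include <errno.h>
#include <stdio.h>

int main(void)
{
    pthread_mutex_t m;
    pthread_mutexattr_t attr;
    int rc;

    /* create an error-checking mutex */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&m, &attr);

    pthread_mutex_lock(&m);            /* first lock succeeds               */
    rc = pthread_mutex_lock(&m);       /* second lock by the same thread... */
    if (rc == EDEADLK)
        printf("deadlock detected\n"); /* ...is reported instead of hanging */

    pthread_mutex_unlock(&m);
    pthread_mutex_destroy(&m);
    return 0;
}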

Refined Synchronization - Condition Variables

As we've seen before with mutexes, they allow for simple coordination - exclusive access to a resource. However, we often need to be able to make real synchronization between threads:
  • In a server, one thread reads requests from clients, and dispatches them to several threads for handling. These threads need to be notified when there is data to process, otherwise they should wait without consuming CPU time.
  • In a GUI (Graphical User Interface) Application, one thread reads user input, another handles graphical output, and a third thread sends requests to a server and handles its replies. The server-handling thread needs to be able to notify the graphics-drawing thread when a reply from the server arrived, so it will immediately show it to the user. The user-input thread needs to be always responsive to the user, for example, to allow her to cancel long operations currently executed by the server-handling thread.
All these examples require the ability to send notifications between threads. This is where condition variables are brought into the picture.


What Is A Condition Variable?

A condition variable is a mechanism that allows threads to wait (without wasting CPU cycles) for some event to occur. Several threads may wait on a condition variable, until some other thread signals this condition variable (thus sending a notification). At this time, one of the threads waiting on this condition variable wakes up, and can act on the event. It is possible to also wake up all threads waiting on this condition variable by using a broadcast method on this variable.
Note that a condition variable does not provide locking. Thus, a mutex is used along with the condition variable, to provide the necessary locking when accessing this condition variable.

Creating And Initializing A Condition Variable

Creation of a condition variable requires defining a variable of type pthread_cond_t, and initializing it properly. Initialization may be done either with the macro PTHREAD_COND_INITIALIZER or with the pthread_cond_init() function. We will show the first form here:

pthread_cond_t got_request = PTHREAD_COND_INITIALIZER; 

This defines a condition variable named 'got_request', and initializes it.
Note: since the PTHREAD_COND_INITIALIZER is actually a structure, it may be used to initialize a condition variable only when it is declared. In order to initialize it during runtime, one must use the pthread_cond_init() function.
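
A minimal sketch of the run-time form (assuming the default condition-variable attributes are what you want):

#include <pthread.h>

pthread_cond_t got_request;

/* initialize 'got_request' at run time with the default attributes */
int init_condition(void)
{
    int rc = pthread_cond_init(&got_request, NULL);
    /* rc is 0 on success, or an error code such as ENOMEM on failure */
    return rc;
}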

Signaling A Condition Variable

In order to signal a condition variable, one should use either the pthread_cond_signal() function (to wake up only one thread waiting on this variable), or the pthread_cond_broadcast() function (to wake up all threads waiting on this variable). Here is an example using signal, assuming 'got_request' is a properly initialized condition variable:

int rc = pthread_cond_signal(&got_request); 

Or by using the broadcast function:

int rc = pthread_cond_broadcast(&got_request); 

When either function returns, 'rc' is set to 0 on success, and to a non-zero value on failure. In such a case (failure), the return value denotes the error that occurred (EINVAL denotes that the given parameter is not a condition variable; ENOMEM denotes that the system has run out of memory).
Note: success of a signaling operation does not mean any thread was awakened - it might be that no thread was waiting on the condition variable, and thus the signaling does nothing (i.e. the signal is lost).
It is also not remembered for future use - if after the signaling function returns another thread starts waiting on this condition variable, a further signal is required to wake it up.


Waiting On A Condition Variable

If one thread signals the condition variable, other threads would probably want to wait for this signal. They may do so using one of two functions, pthread_cond_wait() or pthread_cond_timedwait(). Each of these functions takes a condition variable, and a mutex (which should be locked before calling the wait function), unlocks the mutex, and waits until the condition variable is signaled, suspending the thread's execution. If this signaling causes the thread to awake (see discussion of pthread_cond_signal() earlier), the mutex is automagically locked again by the wait function, and the wait function returns.

The only difference between these two functions is that pthread_cond_timedwait() allows the programmer to specify a timeout for the waiting, after which the function always returns, with a proper error value (ETIMEDOUT) to notify that the condition variable was NOT signaled before the timeout passed. pthread_cond_wait() would wait indefinitely if the condition variable is never signaled.

Here is how to use these two functions. We make the assumption that 'got_request' is a properly initialized condition variable, and that 'a_mutex' is a properly initialized mutex. First, we try the pthread_cond_wait() function:


/* first, lock the mutex */
int rc = pthread_mutex_lock(&a_mutex);
if (rc) { /* an error has occurred */
    perror("pthread_mutex_lock");
    pthread_exit(NULL);
}
/* mutex is now locked - wait on the condition variable.             */
/* During the execution of pthread_cond_wait, the mutex is unlocked. */
rc = pthread_cond_wait(&got_request, &a_mutex);
if (rc == 0) { /* we were awakened due to the cond. variable being signaled */
               /* The mutex is now locked again by pthread_cond_wait()      */
    /* do your stuff... */
    .
}
/* finally, unlock the mutex */ 
pthread_mutex_unlock(&a_mutex);


Now an example using the pthread_cond_timedwait() function:


#include <sys/time.h>   /* struct timeval definition           */
#include <unistd.h>     /* declaration of gettimeofday()       */

struct timeval  now;            /* time when we started waiting        */
struct timespec timeout;        /* timeout value for the wait function */
int             done;           /* are we done waiting?                */

/* first, lock the mutex */
int rc = pthread_mutex_lock(&a_mutex);
if (rc) { /* an error has occurred */
    perror("pthread_mutex_lock");
    pthread_exit(NULL);
}
/* mutex is now locked */

/* get current time */ 
gettimeofday(&now, NULL);
/* prepare timeout value */
timeout.tv_sec = now.tv_sec + 5;
timeout.tv_nsec = now.tv_usec * 1000; /* timeval uses microseconds.          */
                                      /* timespec uses nanoseconds.          */
                                      /* 1 microsecond = 1000 nanoseconds.   */

/* wait on the condition variable. */
/* we use a loop, since a Unix signal might stop the wait before the timeout */
done = 0;
while (!done) {
    /* remember that pthread_cond_timedwait() unlocks the mutex on entrance */
    rc = pthread_cond_timedwait(&got_request, &a_mutex, &timeout);
    switch(rc) {
        case 0:  /* we were awakened due to the cond. variable being signaled */
                 /* the mutex was now locked again by pthread_cond_timedwait. */
            /* do your stuff here... */
            .
            .
            done = 1;   /* we were signaled - stop waiting */
            break;
        case ETIMEDOUT: /* our time is up */
            done = 1;   /* the timeout expired - stop waiting */
            break;
        default:        /* some error occurred (e.g. we got a Unix signal) */
            break;      /* break this switch, but re-do the while loop.   */
    }
}
/* finally, unlock the mutex */
pthread_mutex_unlock(&a_mutex);


As you can see, the timed wait version is way more complex, and thus is better wrapped up in some function, rather than being re-coded in every necessary location.


Note: it might be that a condition variable that has 2 or more threads waiting on it is signaled many times, and yet one of the threads waiting on it is never awakened. This is because we are not guaranteed which of the waiting threads is awakened when the variable is signaled. It might be that the awakened thread quickly comes back to waiting on the condition variable, and gets awakened again when the variable is signaled again, and so on. The situation for the un-awakened thread is called 'starvation'. It is up to the programmer to make sure this situation does not occur if it implies bad behavior. Yet, in our server example from before, this situation might indicate requests are coming in at a very slow pace, and thus perhaps we have too many threads waiting to service requests. In this case, this situation is actually good, as it means every request is handled immediately when it arrives.
Note 2: when the condition variable is broadcast (using pthread_cond_broadcast()), this does not mean all threads are running together. Each of them tries to lock the mutex again before returning from their wait function, and thus they'll start running one by one, each one locking the mutex, doing their work, and freeing the mutex before the next thread gets its chance to run.

Destroying A Condition Variable

After we are done using a condition variable, we should destroy it, to free any system resources it might be using. This can be done using the pthread_cond_destroy() function. In order for this to work, there should be no threads waiting on this condition variable. Here is how to use this function, again, assuming 'got_request' is a pre-initialized condition variable:


int rc = pthread_cond_destroy(&got_request);
if (rc == EBUSY) { /* some thread is still waiting on this condition variable */
    /* handle this case here... */
    .
    .
}


What if some thread is still waiting on this variable? Depending on the case, it might imply some flaw in the usage of this variable, or just a lack of proper thread cleanup code. It is probably good to alert the programmer, at least during the debug phase of the program, of such a case. It might mean nothing, but it might be significant.



A Real Condition For A Condition Variable

A note should be taken about condition variables - they are usually pointless without some real condition checking combined with them. To make this clear, let's consider the server example we introduced earlier. Assume that we use the 'got_request' condition variable to signal that a new request has arrived that needs handling, and is held in some requests queue. If we had threads waiting on the condition variable when this variable is signaled, we are assured that one of these threads will awake and handle this request.
However, what if all threads are busy handling previous requests when a new one arrives? The signaling of the condition variable will do nothing (since all threads are busy doing other things, NOT waiting on the condition variable now), and after all threads finish handling their current request, they come back to wait on the variable, which won't necessarily be signaled again (for example, if no new requests arrive). Thus, there is at least one request pending, while all handling threads are blocked, waiting for a signal.
In order to overcome this problem, we may set some integer variable to denote the number of pending requests, and have each thread check the value of this variable before waiting on the variable. If this variable's value is positive, some request is pending, and the thread should go and handle it, instead of going to sleep. Furthermore, a thread that handled a request should reduce the value of this variable by one, to keep the count correct.
Let's see how this affects the waiting code we have seen above.



/* number of pending requests, initially none */
int num_requests = 0;
.
.
/* first, lock the mutex */
int rc = pthread_mutex_lock(&a_mutex);
if (rc) { /* an error has occurred */
    perror("pthread_mutex_lock");
    pthread_exit(NULL);
}
/* mutex is now locked - wait on the condition variable */
/* if there are no requests to be handled.              */
rc = 0;
if (num_requests == 0)
    rc = pthread_cond_wait(&got_request, &a_mutex);
if (num_requests > 0 && rc == 0) { /* we have a request pending */
    /* do your stuff... */
    .
    .
    /* decrease count of pending requests */
    num_requests--;
}
/* finally, unlock the mutex */
pthread_mutex_unlock(&a_mutex);
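
For symmetry, the thread that adds requests must update the counter and signal under the same mutex. A possible sketch (not taken verbatim from the article), assuming the same 'a_mutex', 'got_request' and 'num_requests' as in the fragment above:

/* executed by the thread that receives a new request */
void add_request(void)
{
    int rc = pthread_mutex_lock(&a_mutex);
    if (rc) { /* an error has occurred - rc holds the error code */
        pthread_exit(NULL);
    }
    /* record the new request and wake up one waiting handler */
    num_requests++;
    pthread_cond_signal(&got_request);
    pthread_mutex_unlock(&a_mutex);
}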

Full Article here

http://www.cs.kent.edu/~ruttan/sysprog/lectures/multi-thread/multi-thread.html