#include <sys/param.h>
#include <sys/proc.h>
#include <sys/epoch.h>
Epochs are allocated with epoch_alloc() and freed with epoch_free(). The flags passed to epoch_alloc() determine whether preemption is allowed during a section, as specified by EPOCH_PREEMPT, or not (the default). Threads indicate the start of an epoch critical section by calling epoch_enter(), and the end of the critical section by calling epoch_exit(). The _preempt variants are used around code in which preemption is allowed. A thread can wait until a grace period has elapsed since any thread entered the epoch by calling epoch_wait() or epoch_wait_preempt(), according to the epoch's type. Using the default epoch type permits epoch_wait(), which is guaranteed to complete much sooner because no thread in a section of such an epoch can be preempted before completing its section. If the thread cannot sleep, or is otherwise on a performance-sensitive path, it can instead call epoch_call() with a callback that performs whatever work must wait for a grace period to elapse. Only non-sleepable locks may be acquired during a section protected by epoch_enter_preempt() and epoch_exit_preempt(). With INVARIANTS, a thread can assert that it is inside an epoch section by using in_epoch().
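As a rough illustration of this interface, the following is a minimal sketch only, not code from the kernel sources: struct foo, foo_epoch, foo_list, the use of M_TEMP and the helper functions are invented for this page, the list is traversed with the ck_queue macros discussed below, and the epoch_call() argument order follows the example further down.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
#include <sys/epoch.h>
#include <ck_queue.h>

/* Hypothetical object protected by a default (non-preemptible) epoch. */
struct foo {
	CK_STAILQ_ENTRY(foo)	foo_link;
	struct epoch_context	foo_epoch_ctx;
	int			foo_value;
};

static CK_STAILQ_HEAD(, foo) foo_list;	/* CK_STAILQ_INIT() at module load */
static epoch_t foo_epoch;		/* epoch_alloc()ed at module load */

/* Reader: runs inside an epoch section; takes no locks and never sleeps. */
static bool
foo_lookup(int value)
{
	struct foo *f;
	bool found = false;

	epoch_enter(foo_epoch);
	KASSERT(in_epoch(foo_epoch), ("foo_lookup: not in epoch"));
	CK_STAILQ_FOREACH(f, &foo_list, foo_link) {
		if (f->foo_value == value) {
			found = true;
			break;
		}
	}
	epoch_exit(foo_epoch);
	return (found);
}

/* Deferred destructor, invoked once a grace period has elapsed. */
static void
foo_destroy(epoch_context_t ctx)
{
	struct foo *f = __containerof(ctx, struct foo, foo_epoch_ctx);

	free(f, M_TEMP);
}

/* Writer: unlink under its own serialization, then defer the free. */
static void
foo_remove(struct foo *f)
{
	/* ... the writer lock for foo_list is assumed to be held ... */
	CK_STAILQ_REMOVE(&foo_list, f, foo, foo_link);
	epoch_call(foo_epoch, &f->foo_epoch_ctx, foo_destroy);
}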
The epoch API currently does not support sleeping in epoch_preempt sections. A caller should never call epoch_wait() in the middle of an epoch section for the same epoch, as this will lead to deadlock.
By default, mutexes cannot be held across epoch_wait_preempt(). To permit this, the epoch must be allocated with EPOCH_LOCKED. When doing so, one must be cautious of creating a situation where a deadlock is possible. Note that epochs are not a straight replacement for read locks. Callers must use safe list and tailq traversal routines in an epoch (see ck_queue). When modifying a list referenced from an epoch section, safe removal routines must be used, and the caller can no longer modify a list entry in place. An item to be modified must be handled copy-on-write, and frees must be deferred until after a grace period has elapsed.
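Building on the hypothetical struct foo sketch above, modifying an entry would therefore be done copy-on-write, roughly as follows (again only a sketch; the writer is assumed to hold whatever lock serializes updates to foo_list):

/*
 * Readers inside epoch sections may still hold a pointer to "old"
 * after it is unlinked, so the old element is never modified or
 * freed directly; a modified copy replaces it instead.
 */
static void
foo_update(struct foo *old, int new_value)
{
	struct foo *new;

	new = malloc(sizeof(*new), M_TEMP, M_WAITOK | M_ZERO);
	*new = *old;			/* copy, then modify the copy */
	new->foo_value = new_value;

	/* Writer lock for foo_list is assumed to be held here. */
	CK_STAILQ_INSERT_AFTER(&foo_list, old, new, foo_link);
	CK_STAILQ_REMOVE(&foo_list, old, foo, foo_link);

	/* The old element may be freed only after a grace period. */
	epoch_call(foo_epoch, &old->foo_epoch_ctx, foo_destroy);
}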
The epoch_drain_callbacks() function is used to drain all callbacks pending from prior epoch_call() invocations on the same epoch. It is useful when shared memory structures referred to by the epoch callbacks are not refcounted and are rarely freed. The typical place to call this function is right before freeing or invalidating the shared resources used by the epoch callbacks. This function can sleep and is not optimized for performance.
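As a sketch of that pattern (foo_softc and this detach path are hypothetical, and it is assumed nothing can queue further epoch_call() work against the buffer at this point):

struct foo_softc {
	void	*shared_buf;	/* referenced by pending epoch callbacks */
};

static void
foo_detach(struct foo_softc *sc)
{
	/*
	 * Callbacks already queued via epoch_call() may still look at
	 * sc->shared_buf, so wait for all of them to run first.  This
	 * can sleep, so it must not be called from an epoch section.
	 */
	epoch_drain_callbacks(foo_epoch);

	free(sc->shared_buf, M_TEMP);
	sc->shared_buf = NULL;
}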
Async free example:

Thread 1:

	int
	in_pcbladdr(struct inpcb *inp, struct in_addr *faddr,
	    struct in_addr *laddr, struct ucred *cred)
	{
		/* ... */
		epoch_enter(net_epoch);
		CK_STAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) {
			sa = ifa->ifa_addr;
			if (sa->sa_family != AF_INET)
				continue;
			sin = (struct sockaddr_in *)sa;
			if (prison_check_ip4(cred, &sin->sin_addr) == 0) {
				ia = (struct in_ifaddr *)ifa;
				break;
			}
		}
		epoch_exit(net_epoch);
		/* ... */
	}

Thread 2:

	void
	ifa_free(struct ifaddr *ifa)
	{

		if (refcount_release(&ifa->ifa_refcnt))
			epoch_call(net_epoch, &ifa->ifa_epoch_ctx, ifa_destroy);
	}

	void
	if_purgeaddrs(struct ifnet *ifp)
	{

		/* ... */
		IF_ADDR_WLOCK(ifp);
		CK_STAILQ_REMOVE(&ifp->if_addrhead, ifa, ifaddr, ifa_link);
		IF_ADDR_WUNLOCK(ifp);
		ifa_free(ifa);
	}
Thread 1 traverses the ifaddr list in an epoch. Thread 2 unlinks with the corresponding epoch-safe macro, marks it as logically free, and then defers deletion. More general mutation, or a synchronous free, would have to follow a call to epoch_wait().
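If the ifaddr could instead be torn down synchronously, Thread 2's unlink path would look roughly like the following sketch (hypothetical: the reference counting done by ifa_free() is elided, and the calling context must be able to block in epoch_wait()):

	void
	if_purgeaddrs_sync(struct ifnet *ifp, struct ifaddr *ifa)
	{
		IF_ADDR_WLOCK(ifp);
		CK_STAILQ_REMOVE(&ifp->if_addrhead, ifa, ifaddr, ifa_link);
		IF_ADDR_WUNLOCK(ifp);

		/*
		 * Block until every thread that might have seen "ifa" in
		 * its epoch section has exited that section; after that
		 * the entry can be destroyed directly rather than via
		 * epoch_call().
		 */
		epoch_wait(net_epoch);
		ifa_destroy(&ifa->ifa_epoch_ctx);
	}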
EPOCH(9)	April 30, 2020