
HDFDirector
ptolemy.domains.hdf.kernel.HDFDirector
The Heterochronous Dataflow (HDF) domain is an extension of the
Synchronous Dataflow (SDF) domain and implements the HDF model of
computation [1]. In SDF, the set of port rates of an actor (called its
rate signature) is constant. In HDF, however, rate signatures are allowed
to change between iterations of the HDF schedule.
<p>
This director is often used with HDFFSMDirector. The HDFFSMDirector
governs the execution of a modal model. Changes of rate signatures can
be modeled by state transitions of the modal model, where each state
refinement implies its own set of rate signatures. Within each state,
the HDF model behaves like an SDF model.
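<p>
As an illustration only, the following sketch constructs a minimal
top-level model governed by an HDFDirector. The Ramp-to-Discard
topology and all names are placeholders; a realistic HDF model would
typically contain a modal model governed by an HDFFSMDirector whose
state refinements have differing rate signatures.
<pre>
import ptolemy.actor.Manager;
import ptolemy.actor.TypedCompositeActor;
import ptolemy.actor.lib.Discard;
import ptolemy.actor.lib.Ramp;
import ptolemy.domains.hdf.kernel.HDFDirector;

public class HdfExample {
    public static void main(String[] args) throws Exception {
        // Top-level composite governed by an HDFDirector.
        TypedCompositeActor top = new TypedCompositeActor();
        top.setName("top");
        HDFDirector director = new HDFDirector(top, "HDFDirector");

        // A trivial dataflow graph standing in for a real HDF model.
        Ramp ramp = new Ramp(top, "ramp");
        Discard sink = new Discard(top, "sink");
        top.connect(ramp.output, sink.input);

        // Run five top-level iterations of the computed schedule.
        director.iterations.setExpression("5");
        Manager manager = new Manager(top.workspace(), "manager");
        top.setManager(manager);
        manager.execute();
    }
}
</pre>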
<p>
This director recomputes schedules dynamically. To improve efficiency,
it uses a CachedSDFScheduler, which caches schedules keyed by their
corresponding rate signatures, with the most recently used schedule at
the head of the queue. When a state in the HDF model is revisited, the
schedule identified by its rate signatures is retrieved from the cache
rather than recomputed.
<p>
The size of the cache in the CachedSDFScheduler is set by the
<i>scheduleCacheSize</i> parameter of HDFDirector. The default value of
this parameter is 100. If the cache is full, the least recently used
schedule (at the end of the cache) is discarded.
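<p>
To make the eviction policy concrete, here is a small, hypothetical
sketch of an LRU schedule cache keyed by rate signatures, built on
java.util.LinkedHashMap in access order. It is not the actual
CachedSDFScheduler implementation, only an illustration of the policy
described above.
<pre>
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: maps a rate-signature key (for example, a
// string encoding of all port rates) to a computed schedule object.
class ScheduleCache extends LinkedHashMap<String, Object> {
    private final int _maxSize;

    ScheduleCache(int maxSize) {
        // accessOrder = true: iteration runs from least recently used
        // to most recently used, which is what LRU eviction needs.
        super(16, 0.75f, true);
        _maxSize = maxSize;
    }

    protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
        // Discard the least recently used schedule when the cache is
        // full. A non-positive maximum means "never discard", matching
        // the documented behavior of the scheduleCacheSize parameter.
        return _maxSize > 0 && size() > _maxSize;
    }
}
</pre>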
<p>
<b>References</b>
<p>
<OL>
<LI>
A. Girault, B. Lee, and E. A. Lee,
"<A HREF="http://ptolemy.eecs.berkeley.edu/papers/98/starcharts">
Hierarchical Finite State Machines with Multiple Concurrency Models</A>,"
April 13, 1998.</LI>
</OL>
Author(s): Ye Zhou. Contributor: Brian K. Vogel
Version: $Id: HDFDirector.doc.html,v 1.1 2006/02/22 18:40:27 mangal Exp $
Pt.Proposed Rating: Red (zhouye)
Pt.Accepted Rating: Red (cxh)
scheduleCacheSize
A parameter representing the size of the schedule cache to use.
If the value is less than or equal to zero, then schedules
will never be discarded from the cache. The default value is 100.
<p>
Note that the number of schedules in an HDF model can be
exponential in the number of actors. Setting the cache size to a
very large value is therefore not recommended if the
model contains a large number of HDF actors.
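<p>
For example, the cache size can be set from Java using the standard
Parameter API. This is a sketch; it assumes a variable named
<i>director</i> holding an HDFDirector instance and relies on the
Ptolemy convention that parameters are exposed as public fields.
<pre>
// Shrink the cache for a model with few distinct rate signatures.
director.scheduleCacheSize.setExpression("20");

// A non-positive value disables eviction, so every computed schedule
// is kept. Use with care: the number of distinct schedules can grow
// exponentially with the number of HDF actors.
director.scheduleCacheSize.setExpression("-1");
</pre>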