TinyOS: An Operating System for Sensor Networks
Jason Hill, Robert Szewczyk, Alec Woo, Philip Levis, Sam Madden, Cameron Whitehouse,
Joseph Polastre, David Gay, Cory Sharp, Matt Welsh,
Eric Brewer and David Culler
Abstract
We present TinyOS, a flexible, application-specific operating sys-
tem for sensor networks. Sensor networks consist of (potentially)
thousands of tiny, low-power nodes, each of which execute con-
current, reactive programs that must operate with severe memory
and power constraints. The sensor network challenges of limited
resources, event-centric concurrent applications, and low-power
operation drive the design of TinyOS. Our solution combines flex-
ible, fine-grain components with an execution model that supports
complex yet safe concurrent operations. TinyOS meets these chal-
lenges well and has become the platform of choice for sensor net-
work research; it is in use by over a hundred groups worldwide,
and supports a broad range of applications and research topics.
We provide a qualitative and quantitative evaluation of the system,
showing that it supports complex, concurrent programs with very
low memory requirements (many applications fit within 16KB of
memory, and the core OS is 400 bytes) and efficient, low-power
operation. We present our experiences with TinyOS as a platform
for sensor network innovation and applications.
1 Introduction
Advances in networking and integration have enabled
small, flexible, low-cost nodes that interact with their en-
vironment and with each other through sensors, actuators
and communication. Single-chip systems are now emerg-
ing that integrate a low-power CPU and memory, radio
or optical communication [75], and MEMS-based on-chip
sensors. The low cost of these systems enables embedded
networks of thousands of nodes [18] for applications ranging
from environmental and habitat monitoring [11, 51] and
seismic analysis of structures [10] to object localization
and tracking [68].
Sensor networks are a very active research space, with
ongoing work on networking [22, 38, 83], application sup-
port [25, 27, 49], radio management [8, 84], and secu-
rity [9, 45, 61, 81], as a partial list. A primary goal of
TinyOS is to enable and accelerate this innovation.
Four broad requirements motivate the design of TinyOS:
1) Limited resources: Motes have very limited physical
resources, due to the goals of small size, low cost, and low
power consumption. Current motes consist of about a 1-
MIPS processor and tens of kilobytes of storage. We do
not expect new technology to remove these limitations: the
benefits of Moore’s Law will be applied to reduce size and
cost, rather than increase capability. Although our current
motes are measured in square centimeters, a version is in
fabrication that measures less than 5 mm².
2) Reactive Concurrency: In a typical sensor network
application, a node is responsible for sampling aspects of
its environment through sensors, perhaps manipulating it
through actuators, performing local data processing, transmitting data, routing data for others, and participating in various distributed processing tasks, such as statistical aggregation or feature recognition. Many of these events, such as radio management, require real-time responses. This requires an approach to concurrency management that reduces potential bugs while respecting resource and timing constraints.

3) Flexibility: The variation in hardware and applications and the rate of innovation require a flexible OS that is both application-specific to reduce space and power, and independent of the boundary between hardware and software. In addition, the OS should support fine-grain modularity and interpositioning to simplify reuse and innovation.

4) Low Power: Demands of size and cost, as well as untethered operation, make low-power operation a key goal of mote design. Battery density doubles roughly every 50 years, which makes power an ongoing challenge. Although energy harvesting offers many promising solutions, at the very small scale of motes we can harvest only microwatts of power. This is insufficient for continuous operation of even the most energy-efficient designs. Given the broad range of applications for sensor networks, TinyOS must not only address extremely low-power operation, but also provide a great deal of flexibility in power-management and duty-cycle strategies.

In our approach to these requirements we focus on two broad principles:
Event Centric: Like the applications, the solution must be
event centric. The normal operation is the reactive ex-
ecution of concurrent events.
Platform for Innovation: The space of networked sensors
is novel and complex: we therefore focus on flexibility
and enabling innovation, rather than the “right” OS
from the beginning.
TinyOS is a tiny (fewer than 400 bytes), flexible oper-
ating system built from a set of reusable components that
are assembled into an application-specific system. TinyOS
supports an event-driven concurrency model based on split-
phase interfaces, asynchronous events, and deferred com-
putation called tasks. TinyOS is implemented in the nesC
language [24], which supports the TinyOS component and
concurrency model as well as extensive cross-component
optimizations and compile-time race detection. TinyOS
has enabled both innovation in sensor network systems and
a wide variety of applications. TinyOS has been under
development for several years and is currently in its third
generation involving several iterations of hardware, radio
stacks, and programming tools. Over one hundred groups
worldwide use it, including several companies within their
products.
Interface         Description
ADC               Sensor hardware interface
Clock             Hardware clock
EEPROMRead/Write  EEPROM read and write
HardwareId        Hardware ID access
I2C               Interface to I2C bus
Leds              Red/yellow/green LEDs
MAC               Radio MAC layer
Mic               Microphone interface
Pot               Hardware potentiometer for transmit power
Random            Random number generator
ReceiveMsg        Receive Active Message
SendMsg           Send Active Message
StdControl        Init, start, and stop components
Time              Get current time
TinySec           Lightweight encryption/decryption
WatchDog          Watchdog timer control

Figure 1: Core interfaces provided by TinyOS.
This paper details the design and motivation of TinyOS,
including its novel approaches to components and concur-
rency, a qualitative and quantitative evaluation of the oper-
ating system, and the presentation of our experience with
it as a platform for innovation and real applications. This
paper makes the following contributions. First, we present
the design and programming model of TinyOS, including
support for concurrency and flexible composition. Second,
we evaluate TinyOS in terms of its performance, small size,
lightweight concurrency, flexibility, and support for low
power operation. Third, we discuss our experience with
TinyOS, illustrating its design through three applications:
environmental monitoring, object tracking, and a declara-
tive query processor. Our previous work on TinyOS dis-
cussed an early system architecture [30] and language de-
sign issues [24], but did not present the operating system
design in detail, provide an in-depth evaluation, or discuss
our extensive experience with the system over the last sev-
eral years.
Section 2 presents an overview of TinyOS, including
the component and execution models, and the support for
concurrency. Section 3 shows how the design meets our
four requirements. Sections 4 and 5 cover some of the en-
abled innovations and applications, while Section 6 covers
related work. Section 7 presents our conclusions.
2 TinyOS
TinyOS has a component-based programming model, cod-
ified by the nesC language [24], a dialect of C. TinyOS
is not an OS in the traditional sense; it is a programming
framework for embedded systems and a set of components
that enable building an application-specific OS into each
application. A typical application is about 15K in size, of
which the base OS is about 400 bytes; the largest applica-
tion, a database-like query system, is about 64K bytes.
2.1 Overview
A TinyOS program is a graph of components, each of
which is an independent computational entity that exposes
one or more interfaces. Components have three computa-
tional abstractions: commands, events, and tasks. Com-
mands and events are mechanisms for inter-component
communication, while tasks are used to express intra-
component concurrency.
A command is typically a request to a component to
perform some service, such as initiating a sensor read-
ing, while an event signals the completion of that service.
Events may also be signaled asynchronously, for example,
due to hardware interrupts or message arrival. From a tra-
ditional OS perspective, commands are analogous to down-
calls and events to upcalls. Commands and events cannot
block: rather, a request for service is split-phase in that the
request for service (the command) and the completion sig-
nal (the corresponding event) are decoupled. The command
returns immediately and the event signals completion at a
later time.
Rather than performing a computation immediately,
commands and event handlers may post a task , a function
executed by the TinyOS scheduler at a later time. This al-
lows commands and events to be responsive, returning im-
mediately while deferring extensive computation to tasks. While tasks may perform significant computation, their basic execution model is run-to-completion, rather than to run indefinitely; this allows tasks to be much lighter-weight than threads. Tasks represent internal concurrency within a component and may only access state within that component. The standard TinyOS task scheduler uses a non-preemptive, FIFO scheduling policy; Section 2.3 presents the TinyOS execution model in detail.

TinyOS abstracts all hardware resources as components. For example, calling the getData() command on a sensor component will cause it to later signal a dataReady() event when the hardware interrupt fires. While many components are entirely software-based, the combination of split-phase operations and tasks makes this distinction transparent to the programmer. For example, consider a component that encrypts a buffer of data. In a hardware implementation, the command would instruct the encryption hardware to perform the operation, while a software implementation would post a task to encrypt the data on the CPU. In both cases an event signals that the encryption operation is complete.

The current version of TinyOS provides a large number of components to application developers, including abstractions for sensors, single-hop networking, ad-hoc routing, power management, timers, and non-volatile storage. A developer composes an application by writing components and wiring them to TinyOS components that provide implementations of the required services. Section 2.2 describes how developers write components and wire them in nesC. Figure 1 lists a number of core interfaces that are available to application developers. Many different components may implement a given interface.
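As a sketch of how these pieces fit together, consider a hypothetical SenseM module (the name and threshold here are invented; ADC and Leds are the standard interfaces from Figure 1) that performs a split-phase sensor read and defers processing to a task. The atomic statements anticipate the race rules of Section 2.3.

  module SenseM {
    provides interface StdControl;
    uses interface ADC;
    uses interface Leds;
  }
  implementation {
    uint16_t reading;   // component-private state; only SenseM may touch it

    command result_t StdControl.init() { return SUCCESS; }

    command result_t StdControl.start() {
      // Split-phase request: getData() returns immediately, and the
      // hardware answers later by signaling dataReady().
      return call ADC.getData();
    }

    command result_t StdControl.stop() { return SUCCESS; }

    // Deferred computation: the scheduler runs this later, to completion.
    task void processData() {
      uint16_t r;
      atomic r = reading;             // reading is shared with async code
      if (r > 0x200) call Leds.redOn();
      else call Leds.redOff();
    }

    // Completion event; it may run in interrupt context, so it only
    // records the sample and posts a task for the real work.
    async event result_t ADC.dataReady(uint16_t data) {
      atomic reading = data;
      post processData();
      return SUCCESS;
    }
  }

The command returns at once; all substantive work happens in the dataReady() event and the posted task, which the scheduler later runs to completion.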
2.2 Component Model
TinyOS’s programming model, provided by the nesC lan-
guage, centers around the notion of components that en-
capsulate a specific set of services, specified by interfaces.
TinyOS itself simply consists of a set of reusable system
components along with a task scheduler. An application
connects components using a wiring specification that is
independent of component implementations. This wiring
specification defines the complete set of components that
the application uses.
The compiler eliminates the penalty of small, fine-
grained components by whole-program (application plus
operating system) analysis and inlining.
Unused components and functionality are not included in the application binary. Inlining occurs across component boundaries and improves both size and efficiency; Section 3.1 evaluates these optimizations.
  module TimerM {
    provides {
      interface StdControl;
      interface Timer[uint8_t id];
    }
    uses interface Clock as Clk;
  }
  implementation {
    ... a dialect of C ...
  }

Figure 2: Specification and graphical depiction of the TimerM component. Provided interfaces are shown above the TimerM component and used interfaces are below. Downward arrows depict commands and upward arrows depict events. [Component graphic not reproduced.]

  interface StdControl {
    command result_t init();
    command result_t start();
    command result_t stop();
  }

  interface Timer {
    command result_t start(char type, uint32_t interval);
    command result_t stop();
    event result_t fired();
  }

  interface Clock {
    command result_t setRate(char interval, char scale);
    event result_t fire();
  }

  interface SendMsg {
    command result_t send(uint16_t address, uint8_t length,
                          TOS_MsgPtr msg);
    event result_t sendDone(TOS_MsgPtr msg, result_t success);
  }

Figure 3: Sample TinyOS interface types.

  configuration TimerC {
    provides {
      interface StdControl;
      interface Timer[uint8_t id];
    }
  }
  implementation {
    components TimerM, HWClock;

    StdControl = TimerM.StdControl;
    Timer = TimerM.Timer;
    TimerM.Clk -> HWClock.Clock;
  }

Figure 4: TinyOS's Timer Service: the TimerC configuration.
A component has two classes of interfaces: those it provides and those it uses. These interfaces define how the component directly interacts with other components. An interface generally models some service (e.g., sending a message) and is specified by an interface type. Figure 2 shows a simplified form of the TimerM component, part of the TinyOS timer service, that provides the StdControl and Timer interfaces and uses a Clock interface (all shown in Figure 3). A component can provide or use the same interface type several times as long as it gives each instance a separate name.

Interfaces are bidirectional and contain both commands and events. A command is a function that is implemented by the providers of an interface; an event is a function that is implemented by its users. For instance, the Timer interface (Figure 3) defines start and stop commands and a fired event. Although the interaction between the timer and its client could have been provided via two separate interfaces (one for its commands and another for its events), grouping them in the same interface makes the specification much clearer and helps prevent bugs when wiring components together.

nesC has two types of components: modules and configurations. Modules provide code and are written in a dialect of C with extensions for calling and implementing com-
mands and events. A module declares private state vari-
ables and data buffers, which only it can reference. Config-
urations are used to wire other components together, con-
necting interfaces used by components to interfaces pro-
vided by others. Figure 4 illustrates the TinyOS timer ser-
vice, which is a configuration (TimerC) that wires the timer
module (TimerM) to the hardware clock component
(HWClock). Configurations allow multiple components to be
aggregated together into a single “supercomponent” that
exposes a single set of interfaces. For example, the TinyOS
networking stack is a configuration wiring together 21 sep-
arate modules and 10 sub-configurations.
Each component has its own interface namespace,
which it uses to refer to the commands and events that
it uses. When wiring interfaces together, a configuration
makes the connection between the local name of an inter-
face used by one component and the local name of the inter-
face provided by another. That is, a component invokes an
interface without referring explicitly to its implementation.
This makes it easy to perform interpositioning by introduc-
ing a new component in the component graph that uses and
provides the same interface.
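A minimal sketch of such an interposer (CountingSendM is invented for illustration; SendMsg is the interface from Figure 3) counts outgoing packets while passing everything through:

  module CountingSendM {
    provides interface SendMsg;
    uses interface SendMsg as SubSend;
  }
  implementation {
    uint16_t packets;   // gathered transparently; neither neighbor changes

    command result_t SendMsg.send(uint16_t address, uint8_t length,
                                  TOS_MsgPtr msg) {
      packets++;
      return call SubSend.send(address, length, msg);  // pass straight through
    }

    event result_t SubSend.sendDone(TOS_MsgPtr msg, result_t success) {
      return signal SendMsg.sendDone(msg, success);    // forward completion
    }
  }

Because it both uses and provides SendMsg, this component can be wired between any existing sender and the network stack without changing either side.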
Interfaces can be wired multiple times; for example, in
Figure 5 the StdControl interface of Main is wired to
Photo, TimerC, and Multihop. This fan-out is transparent
to the caller. nesC allows fan-out as long as the return
type has a function for combining the results of all the
calls. For example, for result_t, this is a logical-AND; a
fan-out returns failure if any subcall fails.
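In a wiring specification, such a fan-out might look like this sketch (the configuration name MainAppC is invented; Main, Photo, and TimerC are the components discussed here):

  configuration MainAppC { }
  implementation {
    components Main, Photo, TimerC;
    // One user, several providers: all wired init()/start()
    // implementations run, and their result_t returns are
    // combined with logical AND.
    Main.StdControl -> Photo.StdControl;
    Main.StdControl -> TimerC.StdControl;
  }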
A component can provide a parameterized interface that
exports many instances of the same interface, parameter-
ized by some identifier (typically a small integer). For ex-
ample, the Timer interface in Figure 2 is parameterized
with an 8-bit id, which is passed to the commands and
events of that interface as an extra parameter. In this case,
the parameterized interface allows the single Timer com-
ponent to implement multiple separate timer interfaces, one
for each client component. A client of a parameterized in-
terface must specify the ID as a constant in the wiring con-
figuration; to avoid conflicts in ID selection, nesC provides
a special unique keyword that selects a unique identifier
for each client.
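A client wiring might therefore look like the following sketch, where BlinkM is a hypothetical module that uses the Timer interface:

  configuration BlinkAppC { }
  implementation {
    components BlinkM, TimerC;
    // unique("Timer") resolves at compile time to an id that no other
    // client wiring with the same string receives, so ids never collide.
    BlinkM.Timer -> TimerC.Timer[unique("Timer")];
  }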
Every TinyOS application is described by a top-level
configuration that wires together the components used. An
example is shown graphically in Figure 5: SurgeC is a sim-
ple application that periodically (TimerC) acquires light
sensor readings (Photo) and sends them back to a base
station using multi-hop routing (Multihop).

[Figure 5 graphic: within SurgeC, Main's StdControl is wired to SurgeM, Photo, TimerC, and Multihop; SurgeM uses ADC (provided by Photo), Timer (TimerC), SendMsg (Multihop), and Leds (LedsC).]

Figure 5: The top-level configuration for the Surge application.
nesC imposes some limitations on C to improve code ef-
ficiency and robustness. First, the language prohibits func-
tion pointers, allowing the compiler to know the precise
call graph of a program. This enables cross-component
optimizations for entire call paths, which can remove the
overhead of cross-module calls as well as inline code for
small components into its callers. Section 3.1 evaluates
these optimizations on boundary crossing overheads. Sec-
ond, the language does not support dynamic memory al-
location; components statically declare all of a program’s
state, which prevents memory fragmentation as well as run-
time allocation failures. The restriction sounds more oner-
ous than it is in practice; the component abstraction elim-
inates many of the needs for dynamic allocation. In the
few rare instances that it is truly needed (e.g., TinyDB, dis-
cussed in Section 5.3), a memory pool component can be
shared by a set of cooperating components.
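Such a pool might be sketched as follows (PoolM and its Pool interface are invented for illustration; TinyDB's actual component differs):

  interface Pool {
    command TOS_MsgPtr alloc();
    command result_t free(TOS_MsgPtr buf);
  }

  module PoolM {
    provides interface Pool;
  }
  implementation {
    enum { POOL_SIZE = 4 };
    TOS_Msg buffers[POOL_SIZE];   // all state is declared statically
    bool inUse[POOL_SIZE];

    command TOS_MsgPtr Pool.alloc() {
      uint8_t i;
      for (i = 0; i < POOL_SIZE; i++)
        if (!inUse[i]) {
          inUse[i] = TRUE;
          return &buffers[i];
        }
      return NULL;                // pool exhausted; callers must cope
    }

    command result_t Pool.free(TOS_MsgPtr buf) {
      uint8_t i;
      for (i = 0; i < POOL_SIZE; i++)
        if (buf == &buffers[i]) {
          inUse[i] = FALSE;
          return SUCCESS;
        }
      return FAIL;
    }
  }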
2.3 Execution Model and Concurrency
The event-centric domain of sensor networks requires fine-
grain concurrency; events can arrive at any time and must
interact cleanly with the ongoing computation. This is a
classic systems problem that has two broad approaches: 1)
atomically enqueueing work on arrival to run later, as in
Click [41] and most message-passing systems, and 2) ex-
ecuting a handler immediately in the style of active mes-
sages [74]. Because some of these events are time criti-
cal, such as start-symbol detection, we chose the latter ap-
proach. nesC can detect data races statically, which elimi-
nates a large class of complex bugs.
The core of the execution model consists of run-to-
completion tasks that represent the ongoing computation,
and interrupt handlers that are signaled asynchronously by
hardware. Tasks are an explicit entity in the language;
a program submits a task to the scheduler for execution
with the post operator. The scheduler can execute tasks
in any order, but must obey the run-to-completion rule.
The standard TinyOS scheduler follows a FIFO policy,
but we have implemented other policies including earliest-
deadline first.
Because tasks are not preempted and run to completion,
they are atomic with respect to each other. However, tasks
are not atomic with respect to interrupt handlers or to com-
mands and events they invoke. To facilitate the detection
of race conditions, we distinguish synchronous and asyn-
chronous code:
Synchronous Code (SC): code that is only reachable from tasks.

Asynchronous Code (AC): code that is reachable from at least one interrupt handler.

The traditional OS approach toward AC is to minimize it and prevent user-level code from being AC. This would be too restrictive for TinyOS. Component writers need to interact with a wide range of real-time hardware, which is not possible in general with the approach of queuing work for later. For example, in the networking stack there are components that interface with the radio at the bit level, the byte level, and via hardware signal-strength indicators. A primary goal is to allow developers to build responsive concurrent data structures that can safely share data between AC and SC; components often have a mix of SC and AC code.

Although non-preemption eliminates races among tasks, there are still potential races between SC and AC, as well as between AC and AC. In general, any update to shared state that is reachable from AC is a potential data race. To reinstate atomicity in such cases, the programmer has two options: convert all of the conflicting code to tasks (SC only), or use atomic sections to update the shared state. An atomic section is a small code sequence that nesC ensures will run atomically. The current implementation turns off interrupts during the atomic section and ensures that it has no loops. Section 3.2 covers an example use of an atomic section to remove a data race.

The basic invariant nesC must enforce is as follows:

Race-Free Invariant: Any update to shared state is either SC-only or occurs in an atomic section.

The nesC compiler enforces this invariant at compile time, preventing nearly all data races. It is possible to introduce a race condition that the compiler cannot detect, but it must span multiple atomic sections or tasks and use storage in intermediate variables.

The practical impact of data race prevention is substantial. First, it eliminates a class of very painful non-deterministic bugs. Second, it means that composition can essentially ignore concurrency. It does not matter which components generate concurrency or how they are wired together: the compiler will catch any sharing violations at compile time. Strong compile-time analysis enables a wide variety of concurrent data structures and synchronization primitives. We have several variations of concurrent queues and state machines. In turn, this makes it easy to handle time-critical actions directly in an event handler, even when they update shared state. For example, radio events are always dealt with in the interrupt handler until a whole packet has arrived, at which point the handler posts a task. Section 3.2 contains an evaluation of the concurrency checking and its ability to catch data races.
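To make these rules concrete, the following sketch (the Resource interface and ChannelM module are invented for illustration) uses an atomic section so that a flag shared with interrupt handlers satisfies the invariant:

  interface Resource {
    async command bool claim();
  }

  module ChannelM {
    provides interface Resource;
  }
  implementation {
    bool busy;   // reachable from AC, so every update needs protection

    // async marks this command as callable from interrupt handlers (AC).
    async command bool Resource.claim() {
      bool won = FALSE;
      atomic {   // nesC compiles this short, loop-free section with
                 // interrupts disabled, making the test-and-set atomic
        if (!busy) {
          busy = TRUE;
          won = TRUE;
        }
      }
      return won;
    }
  }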
2.4 Active Messages
A critical aspect of TinyOS’s design is its networking archi-
tecture, which we detail here. The core TinyOS communi-
cation abstraction is based on Active Messages (AM) [74],
which are small (36-byte) packets associated with a 1-byte
handler ID. Upon reception of an Active Message, a node
dispatches the message (using an event) to one or more han-
dlers that are registered to receive messages of that type.
Handler registration is accomplished using static wiring
and a parameterized interface, as described above.
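Concretely, a receiver might be wired as in this sketch (SinkC, SinkM, and AM type 10 are invented for illustration; GenericComm is the standard TinyOS communication component, and the configuration and module are shown together for brevity):

  // Static handler registration: the wiring selects the AM type.
  configuration SinkC { }
  implementation {
    components SinkM, GenericComm;
    SinkM.ReceiveMsg -> GenericComm.ReceiveMsg[10];  // made-up handler ID
  }

  module SinkM {
    uses interface ReceiveMsg;
  }
  implementation {
    // Dispatched (as an event) when a packet of our AM type arrives.
    event TOS_MsgPtr ReceiveMsg.receive(TOS_MsgPtr m) {
      // ... inspect m->data here ...
      return m;   // hand a buffer back to the radio stack for reuse
    }
  }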
AM provides an unreliable, single-hop datagram proto-
col, and provides a unified communication interface to both
the radio and the built-in serial port (for wired nodes such
as basestations). Higher-level protocols providing multi-
hop communication, larger ADUs, or other features are
readily built on top of the AM interface. Variants of the ba-
sic AM stack exist that incorporate lightweight, link-level
security (see Section 4.1). AM’s event-driven nature and
tight coupling of computation and communication make
the abstraction well suited to the sensor network domain.
2.5 Implementation Status
TinyOS supports a wide range of hardware platforms and
has been used on several generations of sensor motes. Sup-
ported processors include the Atmel AT90L-series, Atmel
ATmega-series, and Texas Instruments MSP-series proces-
sors. TinyOS includes hardware support for the RFM
TR1000 and Chipcon CC1000 radios, as well as several
custom radio chipsets. TinyOS applications may
be compiled to run on any of these platforms without mod-
ification. Work is underway (by others) to port TinyOS
to ARM, Intel 8051 and Hitachi processors and to support
Bluetooth radios.
TinyOS supports an extensive development environ-
ment that incorporates visualization, debugging, and sup-
port tools as well as a fine-grained simulation environment.
Desktops, laptops, and palmtops can serve as proxies be-
tween sensor networks and wired networks, allowing inte-
gration with server side tools implemented in Java, C, or
MATLAB, as well as interfaces to database engines such
as PostgreSQL. nesC includes a tool that generates code to
marshal between Active Message packet formats and Java
classes.
TinyOS includes TOSSIM, a high-fidelity mote simula-
tor that compiles directly from TinyOS nesC code, scaling
to thousands of simulated nodes. TOSSIM gives the pro-
grammer an omniscient view of the network and greater
debugging capabilities. Server-side applications can con-
nect to a TOSSIM proxy just as if it were a real sensor
network, easing the transition between the simulation en-
vironment and actual deployments. TinyOS also provides
JTAG support integrated with gdb for debugging applica-
tions directly on the mote.
3 Meeting the Four Key Requirements
In this section, we show how the design of TinyOS, particu-
larly its component model and execution model, addresses
our four key requirements: limited resources, reactive con-
currency, flexibility and low power. This section quantifies
basic aspects of resource usage and performance, including
storage usage, execution overhead, observed concurrency,
and effectiveness of whole-system optimization.
3.1 Limited Resources
We look at three metrics to evaluate whether TinyOS ap-
plications are lightweight in space and time: (1) the foot-
print of real applications should be small, (2) the compiler
should reduce code size through optimization, and (3) the
overhead for fine-grain modules should be low.
Application         Optimized  Unoptimized  Reduction  Tasks  Events  Modules  Description
Blink                     683         1791        61%      0       2        8  Blink LEDs
GenericBase              4278         6208        31%      3      21       19  Radio-to-UART packet router
CntToLeds                6121         9449        35%      1       7       13  Display counter on LEDs
CntToRfm                 9859        13969        29%      4      31       27  Send counter as radio packet
Habitat monitoring      11415        19181        40%      9      38       32  Periodic environmental sampling
Surge                   14794        20645        22%      9      40       34  Ad-hoc multihop routing demo
Mate                    23741        25907         8%     15      51       39  Small virtual machine
Object tracking         23525        37195        36%     15      39       32  Track object in sensor field
TinyDB                  63726        71269        10%     18     193       91  SQL-like query interface

Figure 6: Size and structure of selected TinyOS applications. Optimized and unoptimized sizes are in bytes.

Absolute Size: A TinyOS program's component graph defines which components it needs to work. Because components are resolved at compile time, compiling an application builds an application-specific version of TinyOS: the resulting image contains exactly the required OS services. As shown in Figure 6, TinyOS and its applications are small. The base TinyOS operating system is less than 400 bytes and associated C runtime primitives (including floating-point libraries) fit in just over 1KB. Blink represents the footprint for a minimal application using the base OS and a primitive hardware timer. CntToLeds incorporates a more sophisticated timer service which requires additional memory. GenericBase captures the footprint of the radio stack while CntToRfm incorporates both the radio stack and the generic timer, which is the case for many real applications. Most applications fit in less than 16KB, while the largest TinyOS application, TinyDB, fits in about 64KB.

Footprint Optimization: TinyOS goes beyond standard techniques to reduce code size (e.g., stripping the symbol table). It uses whole-program compilation to prune dead code, and cross-component optimizations remove redundant operations and module-crossing overhead. Figure 6 shows the reduction in size achieved by these optimizations on a range of applications. Size improvements range from 8% for Mate, to 40% for habitat monitoring, to over 60% for simple applications.

Component Overhead: To be efficient, TinyOS must minimize the overhead for module crossings. Since there are no virtual functions or address-space crossings, the basic boundary crossing is at most a regular procedure call. On Atmel-based platforms, this costs about eight clock cycles. Using whole-program analysis, nesC removes many of these boundary crossings and optimizes entire call paths by applying extensive cross-component optimizations, including constant propagation and common subexpression elimination. For example, nesC can typically inline an entire component into its caller. In the TinyOS timer component, triggering a timer event