Chapter 2: ACE Middleware Framework
Now that we understand the config/logging foundation, let's look at the framework that middleware services are built on.
Why Does ACE Exist?
Imagine you're building a smart home hub. You have a Wi-Fi process, a Bluetooth process, a cloud-sync process — all running at the same time, all needing to talk to each other. Without a shared framework, every team would invent its own threading model, its own IPC protocol, its own event system. Chaos.
ACE (Amazon Common Embedded) solves this by providing four interlocking building blocks:
| Layer | Analogy | Purpose |
|---|---|---|
| OSAL | The foundation of a building | Portable OS primitives (threads, mutexes, shared memory) |
| Dispatcher | A post office sorting room | Work queues that route messages to the right handler |
| EventMgr | A newsletter subscription | Module-scoped publish/subscribe for event distribution |
| AIPC | A phone call between offices | High-performance shared-memory RPC between processes |
Let's explore each one, bottom-up.
2.1 OSAL — The Portable Foundation
📁 middleware/.../iotmi-ace-general/dpk/osal/common/include/ace/
OSAL stands for Operating System Abstraction Layer. Think of it as a universal adapter plug — your code calls aceThread_create() and OSAL translates that into the right system call whether you're on Linux, FreeRTOS, or something else entirely.
What's Inside?
| Header | Provides |
|---|---|
| osal_threads.h | Thread creation, priorities, joining |
| osal_mutex.h | Mutexes (including recursive) |
| osal_semaphore.h | Counting and binary semaphores |
| osal_shmem.h | Named shared memory regions |
| osal_mq.h | Message queues |
| osal_alloc.h | Memory allocation |
Creating a Thread
```c
aceThread_t thread;
aceThread_create(&thread, "my_worker",
                 my_func, ctx,
                 stack_size, ACE_PRIORITY_NORMAL);
```
Protecting Shared State
```c
static aceMutex_t lock = ACE_MUTEX_INITIALIZER;

aceMutex_acquire(&lock);
// ... critical section ...
aceMutex_release(&lock);
```
Creating Shared Memory
Shared memory is the backbone of AIPC's speed. OSAL makes it portable:
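A minimal sketch of how a named region might be created and mapped. The function and type names here (aceShmem_t, aceShmem_open, aceShmem_getAddress, aceShmem_close) are illustrative assumptions, not confirmed against osal_shmem.h:

```c
/* Hypothetical OSAL shared-memory API -- names are illustrative;
 * check osal_shmem.h for the real signatures. */
aceShmem_t region;
aceShmem_open(&region, "my_region",  /* named region, visible to other processes */
              4096,                  /* size in bytes */
              ACE_SHMEM_CREATE);     /* create it if it doesn't already exist */

void *base = aceShmem_getAddress(&region);
/* ... read and write directly through base ... */

aceShmem_close(&region);
```

Because the region is named, a second process can open the same region and see the same bytes, which is exactly what AIPC builds on.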
💡 Key insight: Every higher-level ACE component (Dispatcher, EventMgr, AIPC) is built on top of these OSAL primitives. If you understand this layer, the rest follows naturally.
2.2 Dispatcher — The Work Queue Manager
📁 middleware/.../iotmi-ace-general/framework/dispatcher/include/ace/dispatcher_core.h
The Dispatcher is like a post office sorting room. Modules register themselves, then anyone can drop a message into the queue. The Dispatcher's internal thread picks up each message and delivers it to the right module's callback.
Why Not Just Use Raw Threads?
Raw threads are hard to manage — you'd need to handle synchronization, priority, and lifecycle yourself. The Dispatcher gives you a managed work queue with a dedicated thread pool, so modules just say "here's my work" and walk away.
Lifecycle
- Initialize the dispatcher subsystem
- Create a dispatcher instance (thread + queue)
- Register modules that want to receive work
- Post messages — the dispatcher delivers them
- Destroy when done
Registering a Module
```c
aceDispatcher_module_t mod = {
    .mod_name = "my_module",
    .on_msg = my_message_handler,
    .on_reg = my_init_callback,
};
aceDispatcher_registerModule(&dp, &mod, &handle);
```
Posting Work
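A sketch of posting a message to a registered module. The post function name (aceDispatcher_post) and the message struct are assumptions inferred from the registration API above, not verified against dispatcher_core.h:

```c
/* Hypothetical message type and post call -- illustrative only. */
my_msg_t msg = { .value = 42 };
aceDispatcher_post(&dp, handle,       /* target module's registration handle */
                   EVENT_ID,          /* message identifier                  */
                   &msg, sizeof(msg), /* payload, copied into the queue      */
                   ctx);              /* opaque context passed to the handler */
```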
When the dispatcher dequeues this message, it calls mod.on_msg(EVENT_ID, &msg, sizeof(msg), ctx) on the dispatcher's thread — no manual synchronization needed.
💡 Key insight: The Dispatcher decouples who produces work from who consumes it. A module never needs to know which thread will execute its handler.
2.3 EventMgr — Publish/Subscribe Events
📁 middleware/.../iotmi-ace-general/framework/eventmgr/include/ace/eventmgr_api.h
If the Dispatcher is a post office, EventMgr is a newsletter subscription service. Publishers announce events; subscribers receive them — without either side knowing about the other.
The Three-Step Dance
Events are organized into a hierarchy: Module → Group → Event. A Wi-Fi module might publish a "connected" event under its "status" group.
Step 1: Publisher registers
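A sketch of the registration call, mirroring the subscriber and publish snippets below. The exact function and handle type names (aceEventMgr_registerPublisher, aceEventMgr_publishHandle_t) are assumptions; check eventmgr_api.h for the real API:

```c
/* Hypothetical publisher registration -- names are illustrative. */
aceEventMgr_publishHandle_t pub;
aceEventMgr_registerPublisher(MY_MODULE_ID, &pub);
```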
Step 2: Subscriber registers and subscribes
```c
aceEventMgr_subscribeHandle_t sub;
aceEventMgr_registerSubscriber(
    MY_MODULE_ID, &params, &sub);
aceEventMgr_subscribe(&sub, GROUP_STATUS, EVT_ALL);
```
Step 3: Publisher fires an event
```c
aceEventMgr_publishParams_t pp = {0};
aceEventMgr_setPublishParams(data, len, &pp);
aceEventMgr_publish(&pub, GROUP_STATUS, EVT_CONN, &pp);
```
All registered subscribers for that module/group/event combination receive a callback — automatically.
⚠️ Performance note: Subscriber callbacks are treated like ISRs. Keep them under 1024 bytes of stack and minimize CPU work inside them.
2.4 AIPC — Shared-Memory RPC
📁 middleware/.../iotmi-ace-general/framework/aipc/include/ace/aipc_api.h
AIPC (ACE IPC) is the crown jewel. It lets one process call a function in another process as if it were local — using shared memory for speed and automatic marshalling so you don't have to serialize anything by hand.
Think of it as a phone call between offices: the client dials a service, makes a request, and gets a response — all without knowing the internal details of the other side.
Server Side
```c
aceAipc_serverConfig_t config = {0};
// ... configure service name, handler ...
aceAipc_Handler_t server;
aceAipc_start(&server, &config);
```
Client Side
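A sketch of client setup; the connect call (aceAipc_connect) is an assumption, chosen to match the handle passed to aceAipc_rpcSync below. Check aipc_api.h for the real API:

```c
/* Hypothetical client connection -- illustrative only. */
aceAipc_Handler_t *client = NULL;
aceAipc_connect(&client, "my_service");  /* same name the server registered */
```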
Making an RPC Call
```c
aceAipc_rpcSyncParams_t params = {0};
// ... set function ID, input buffer ...
aceAipc_rpcSync(client, &params);
// params now contains the response
```
AIPC also supports async RPC (aceAipc_rpcAsync) and event subscription (aceAipc_subscribeEvent / aceAipc_publishEvent) for push-style notifications from server to client.
AIPC Call Flow
```mermaid
sequenceDiagram
    participant Client
    participant AIPC
    participant SharedMem
    participant Server
    Client->>AIPC: aceAipc_rpcSync(params)
    AIPC->>SharedMem: Marshal request
    SharedMem->>Server: Dequeue & dispatch
    Server->>SharedMem: Write response
    SharedMem->>AIPC: Demarshal response
    AIPC->>Client: Return result
```
💡 Why shared memory? Traditional IPC (pipes, sockets) copies data between kernel and user space — twice per direction. Shared memory eliminates those copies entirely, which matters a lot on resource-constrained IoT devices.
2.5 Error Handling Across ACE
📁 middleware/.../iotmi-ace-general/framework/core/include/ace/ace_status.h
Every ACE function returns an ace_status_t. The most common codes you'll encounter:
| Code | Value | Meaning |
|---|---|---|
| ACE_STATUS_OK | 0 | Success |
| ACE_STATUS_BAD_PARAM | -11 | Invalid argument |
| ACE_STATUS_NULL_POINTER | -9 | Null pointer passed |
| ACE_STATUS_OUT_OF_MEMORY | -4 | Allocation failed |
| ACE_STATUS_NOT_FOUND | -8 | Resource doesn't exist |
| ACE_STATUS_BUSY | -15 | Resource in use |
| ACE_STATUS_TIMEOUT | -2 | Operation timed out |
The pattern is always the same:
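A sketch of the call-site pattern: check the status, log, and propagate. The post call and logging macro here are illustrative placeholders, not confirmed ACE names:

```c
/* Hypothetical call -- the check-log-propagate shape is the point. */
ace_status_t st = aceDispatcher_post(&dp, handle, EVENT_ID,
                                     &msg, sizeof(msg), ctx);
if (st != ACE_STATUS_OK) {
    LOG_ERROR("dispatcher post failed: %d", st);
    return st;  /* never silently ignore a non-OK status */
}
```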
Module-specific error ranges are reserved (e.g., AIPC uses -500 to -599, Dispatcher uses -800 to -899), so you can always tell which subsystem produced an error.
How It All Fits Together
Here's the mental model: OSAL provides the raw building materials (threads, locks, shared memory). The Dispatcher organizes work into managed queues. EventMgr adds decoupled pub/sub on top. And AIPC ties it all together for cross-process communication using shared memory for speed.
```
┌─────────────────────────────────────────┐
│            Your Application             │
├──────────┬──────────┬───────────────────┤
│   AIPC   │ EventMgr │    Dispatcher     │
│  (RPC)   │ (pub/sub)│  (work queues)    │
├──────────┴──────────┴───────────────────┤
│                  OSAL                   │
│   threads · mutexes · shmem · queues    │
├─────────────────────────────────────────┤
│         OS (Linux / FreeRTOS)           │
└─────────────────────────────────────────┘
```
What's Next?
In Chapter 3: IPC Framework, we'll zoom into how the SDK builds its own IPC layer on top of AIPC — defining specific service interfaces, message schemas, and the client/server patterns that higher-level features like device provisioning and cloud sync rely on.
Previous: Chapter 1 — Configuration, Logging & Certificate Management