Sunday, January 22, 2012

About QNX


QNX Neutrino provides a POSIX API and makes complete use of MMU hardware.
It is built on two fundamental principles:
  ・microkernel architecture
  ・message-based interprocess communication

----
 A microkernel OS is structured as a tiny kernel that provides the minimal services used by a team of optional cooperating processes, which in turn provide the higher-level OS functionality. The microkernel itself lacks filesystems and many other services normally expected of an OS - those services are provided by optional processes.
----
Unlike threads, the microkernel itself is never scheduled for execution. The processor executes code in the microkernel only as the result of an explicit kernel call, an exception, or in response to a hardware interrupt.

message-passing services - the microkernel handles the routing of all messages between all threads throughout the entire system.
scheduling services - the microkernel schedules threads for execution using the various POSIX realtime scheduling policies.
timer services - the microkernel provides the rich set of POSIX timer services.
process management services - the microkernel and the process manager together form a unit (called procnto). The process manager portion is responsible for managing processes, memory, and the pathname space.
----
 The only real difference between system services and applications is that OS services manage resources for clients.
---
In QNX Neutrino, a message is a parcel of bytes passed from one process to another. The OS attaches no special meaning to the content of a message - the data in a message has meaning for the sender of the message and for its receiver, but for no one else.

Message passing not only allows processes to pass data to each other, but also provides a means of synchronizing the execution of several processes. As they send, receive, and reply to messages, processes undergo various “changes of state” that affect when, and for how long, they may run. Knowing their states and priorities, the microkernel can schedule all processes as efficiently as possible to make the most of available CPU resources. This single, consistent method - message-passing - is thus constantly operative throughout the entire system.
----
A thread can be thought of as the minimum “unit of execution,” the unit of scheduling and execution in the microkernel. A process, on the other hand, can be thought of as a “container” for threads, defining the “address space” within which threads will execute. A process will always contain at least one thread.

The OS can be configured to provide a mix of threads and processes (as defined by POSIX). Processes are MMU-protected from each other, and each process may contain one or more threads that share the process's address space.

http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.neutrino_getting_started/s1_procs.html

http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.neutrino_prog/process.html

Although threads within a process share everything within the process's address space, each thread still has some “private” data. In some cases, this private data is protected within the kernel (e.g. the tid or thread ID), while other private data resides unprotected in the process's address space (e.g. each thread has a stack for its own use). Some of the more noteworthy thread-private resources are the thread ID (tid), the register set, the stack, and the signal mask.

----
http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.neutrino_sys_arch/kernel.html#Priority_inheritance_mutexes

http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.neutrino_sys_arch/ipc.html#Priority_inheritance_messages
--
Scheduling policies
To meet the needs of various applications, QNX Neutrino provides these scheduling algorithms:

FIFO scheduling
round-robin scheduling
sporadic scheduling
----
Note that in general, mutexes are much faster than semaphores, which always require a kernel entry. Semaphores don't affect a thread's effective priority; if you need priority inheritance, use a mutex.
--
Interrupt latency
Interrupt latency is the time from the assertion of a hardware interrupt until the first instruction of the device driver's interrupt handler is executed.

Worst-case interrupt latency will be this time plus the longest time for which the OS, or the running system process, disables CPU interrupts.
--
Scheduling latency
In some cases, the low-level hardware interrupt handler must schedule a higher-level thread to run. In this scenario, the interrupt handler will return and indicate that an event is to be delivered. This introduces a second form of latency - scheduling latency - which must be accounted for.

Scheduling latency is the time between the last instruction of the user's interrupt handler and the execution of the first instruction of a driver thread. This usually means the time it takes to save the context of the currently executing thread and restore the context of the required driver thread. Although larger than interrupt latency, this time is also kept small in a QNX Neutrino system.
--
When the hardware interrupt occurs, the processor will enter the interrupt redirector in the microkernel. This code pushes the registers for the context of the currently running thread into the appropriate thread table entry and sets the processor context such that the ISR has access to the code and data that are part of the thread the ISR is contained within. This allows the ISR to use the buffers and code in the user-level thread to resolve the interrupt and, if higher-level work by the thread is required, to queue an event to the thread the ISR is part of, which can then work on the data the ISR has placed into thread-owned buffers.

Since it runs with the memory-mapping of the thread containing it, the ISR can directly manipulate devices mapped into the thread's address space, or directly perform I/O instructions. As a result, device drivers that manipulate hardware don't need to be linked into the kernel.
----
This inherent blocking synchronizes the execution of the sending thread, since the act of requesting that the data be sent also causes the sending thread to be blocked and the receiving thread to be scheduled for execution. This happens without requiring explicit work by the kernel to determine which thread to run next (as would be the case with most other forms of IPC). Execution and data move directly from one context to another.

A server process receives messages and pulses in priority order. As the threads within the server receive requests, they then inherit the priority (but not the scheduling policy) of the sending thread. As a result, the relative priorities of the threads requesting work of the server are preserved, and the server work will be executed at the appropriate priority. This message-driven priority inheritance avoids priority-inversion problems.
----
Once you have a file descriptor to a shared-memory object, you use the mmap() function to map the object, or part of it, into your process's address space. The mmap() function is the cornerstone of memory management within QNX Neutrino.
----
