Tuesday, January 31, 2012

What is EtherCAT?

EtherCAT - Ethernet for Control Automation Technology - is an open, high-performance, Ethernet-based fieldbus system. The development goal of EtherCAT was to apply Ethernet to automation applications that require short data update times (also called cycle times) with low communication jitter (for synchronization purposes) and low hardware costs.


http://en.wikipedia.org/wiki/EtherCAT

To start with, for electronics circuit design:

Do you know any FPGA vendors other than Xilinx and Altera?

Low-tier FPGA companies: Lattice Semiconductor, SiliconBlue, Achronix, QuickLogic
Start-ups: InPa

Note also that the FPGA vendor Actel has been bought by Microsemi.

Microsemi FPGA named product of the year
http://dangerousprototypes.com/2011/01/22/microsemi-fpga-named-product-of-the-year

Monday, January 30, 2012

Windows on ARM processors?

Yes, the next version after Windows 7 will be Windows 8, and it runs on ARM processors. But demos of Windows 8 on ARM processors are tightly controlled by Microsoft. We can expect many new things in Windows 8.

Wednesday, January 25, 2012

Do you know dynamically loaded libraries?

Apart from static and shared libraries, there is a third type of library used by programs. These are called dynamically loaded libraries. They are built as normal shared or statically linked libraries.

The difference is that they are not loaded at program startup; instead, you use the dlopen() and dlsym() application programming interfaces to activate the library. This is how web browser plugins, server modules (e.g. Apache's), and just-in-time compilers work. When you are done using the library, you call dlclose() to remove it from memory. Errors are handled via the dlerror() application programming interface.
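A minimal sketch of that flow in C (the library name libplugin.so and the symbol plugin_init are made up for illustration; on Linux, compile with -ldl):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load the library into the process at run time. */
    void *handle = dlopen("./libplugin.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    dlerror();  /* clear any old error before calling dlsym() */

    /* Look up a function by name inside the loaded library. */
    int (*plugin_init)(void) = (int (*)(void))dlsym(handle, "plugin_init");
    const char *err = dlerror();
    if (err) {
        fprintf(stderr, "dlsym: %s\n", err);
        dlclose(handle);
        return 1;
    }

    plugin_init();      /* call into the library */
    dlclose(handle);    /* unload it when done */
    return 0;
}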

See Dynamic Loading:

 http://en.wikipedia.org/wiki/Dynamic_loading

Dynamic loading is a mechanism by which a computer program can, at run time, load a library (or other binary) into memory, retrieve the addresses of functions and variables contained in the library, execute those functions or access those variables, and unload the library from memory. Unlike static linking and load-time linking, this mechanism allows a computer program to start up in the absence of these libraries, to discover available libraries, and to potentially gain additional functionality.

What does /sbin mean?

sbin is usually explained as short for "system binaries": essential executables for system administration (e.g. init, fsck, ifconfig), normally run by the superuser. Historically it has also been read as "static binaries", because early /sbin executables were statically linked with static libraries so they would still run before shared libraries were available.

Sunday, January 22, 2012

About QNX


POSIX API
Complete use of MMU hardware
Two fundamental principles:
  ・microkernel architecture
  ・message-based interprocess communication

----
 A microkernel OS is structured as a tiny kernel that provides the minimal services used by a team of optional cooperating processes, which in turn provide the higher-level OS functionality. The microkernel itself lacks filesystems and many other services normally expected of an OS - those services are provided by optional processes.
----
Unlike threads, the microkernel itself is never scheduled for execution. The processor executes code in the microkernel only as the result of an explicit kernel call, an exception, or in response to a hardware interrupt.

message-passing services - the microkernel handles the routing of all messages between all threads throughout the entire system.
scheduling services - the microkernel schedules threads for execution using the various POSIX realtime scheduling policies.
timer services - the microkernel provides the rich set of POSIX timer services.
process management services - the microkernel and the process manager together form a unit (called procnto). The process manager portion is responsible for managing processes, memory, and the pathname space.
----
 The only real difference between system services and applications is that OS services manage resources for clients.
---
In QNX Neutrino, a message is a parcel of bytes passed from one process to another. The OS attaches no special meaning to the content of a message - the data in a message has meaning for the sender of the message and for its receiver, but for no one else.

Message passing not only allows processes to pass data to each other, but also provides a means of synchronizing the execution of several processes. As they send, receive, and reply to messages, processes undergo various “changes of state” that affect when, and for how long, they may run. Knowing their states and priorities, the microkernel can schedule all processes as efficiently as possible to make the most of available CPU resources. This single, consistent method - message-passing - is thus constantly operative throughout the entire system.
----
A thread can be thought of as the minimum “unit of execution,” the unit of scheduling and execution in the microkernel. A process, on the other hand, can be thought of as a “container” for threads, defining the “address space” within which threads will execute. A process will always contain at least one thread.

The OS can be configured to provide a mix of threads and processes (as defined by POSIX). Processes are MMU-protected from one another, and each process may contain one or more threads that share the process's address space.

http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.neutrino_getting_started/s1_procs.html

http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.neutrino_prog/process.html

Although threads within a process share everything within the process's address space, each thread still has some “private” data. In some cases, this private data is protected within the kernel (e.g. the tid or thread ID), while other private data resides unprotected in the process's address space (e.g. each thread has a stack for its own use). Some of the more noteworthy thread-private resources are the thread ID (tid), the register set, the stack, the signal mask, and thread local storage.

----
http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.neutrino_sys_arch/kernel.html#Priority_inheritance_mutexes

http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.neutrino_sys_arch/ipc.html#Priority_inheritance_messages
--
Scheduling policies
To meet the needs of various applications, QNX Neutrino provides these scheduling algorithms:

FIFO scheduling
round-robin scheduling
sporadic scheduling
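
These map onto the standard POSIX thread attributes. A hedged sketch of selecting a policy per thread (the priority value 10 is an arbitrary example; real-time priorities usually require appropriate privileges):

#include <pthread.h>
#include <sched.h>

static void *worker(void *arg) { /* ... */ return 0; }

int main(void)
{
    pthread_attr_t attr;
    struct sched_param param;

    pthread_attr_init(&attr);
    /* don't inherit the creator's policy; use the attributes below */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);   /* or SCHED_RR */
    param.sched_priority = 10;                        /* policy-dependent range */
    pthread_attr_setschedparam(&attr, &param);

    pthread_t tid;
    pthread_create(&tid, &attr, worker, 0);
    pthread_join(tid, 0);
    return 0;
}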
----
Note that in general, mutexes are much faster than semaphores, which always require a kernel entry. Semaphores don't affect a thread's effective priority; if you need priority inheritance, use a mutex.
--
Interrupt latency
Interrupt latency is the time from the assertion of a hardware interrupt until the first instruction of the device driver's interrupt handler is executed.

Worst-case interrupt latency will be this time plus the longest time in which the OS, or the running system process, disables CPU interrupts.
--
Scheduling latency
In some cases, the low-level hardware interrupt handler must schedule a higher-level thread to run. In this scenario, the interrupt handler will return and indicate that an event is to be delivered. This introduces a second form of latency - scheduling latency - which must be accounted for.

Scheduling latency is the time between the last instruction of the user's interrupt handler and the execution of the first instruction of a driver thread. This usually means the time it takes to save the context of the currently executing thread and restore the context of the required driver thread. Although larger than interrupt latency, this time is also kept small in a QNX Neutrino system.
--
When the hardware interrupt occurs, the processor will enter the interrupt redirector in the microkernel. This code pushes the registers for the context of the currently running thread into the appropriate thread table entry and sets the processor context such that the ISR has access to the code and data that are part of the thread the ISR is contained within. This allows the ISR to use the buffers and code in the user-level thread to resolve the interrupt and, if higher-level work by the thread is required, to queue an event to the thread the ISR is part of, which can then work on the data the ISR has placed into thread-owned buffers.

Since it runs with the memory-mapping of the thread containing it, the ISR can directly manipulate devices mapped into the thread's address space, or directly perform I/O instructions. As a result, device drivers that manipulate hardware don't need to be linked into the kernel.
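
A hedged sketch of this model using QNX Neutrino's InterruptAttachEvent(), where the kernel only queues an event and a user-level driver thread does the actual work (IRQ_NUM is a made-up vector number):

#include <sys/neutrino.h>
#include <sys/siginfo.h>

#define IRQ_NUM 5   /* hypothetical interrupt vector */

int main(void)
{
    struct sigevent event;
    int id;

    ThreadCtl(_NTO_TCTL_IO, 0);      /* obtain I/O privileges */
    SIGEV_INTR_INIT(&event);         /* wake us via InterruptWait() */

    /* the IRQ is masked on delivery until we unmask it again */
    id = InterruptAttachEvent(IRQ_NUM, &event, _NTO_INTR_FLAGS_TRK_MSK);
    for (;;) {
        InterruptWait(0, NULL);      /* block until the IRQ fires */
        /* ... service the device from this user-level thread ... */
        InterruptUnmask(IRQ_NUM, id);
    }
}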
----
This inherent blocking synchronizes the execution of the sending thread, since the act of requesting that the data be sent also causes the sending thread to be blocked and the receiving thread to be scheduled for execution. This happens without requiring explicit work by the kernel to determine which thread to run next (as would be the case with most other forms of IPC). Execution and data move directly from one context to another.

A server process receives messages and pulses in priority order. As the threads within the server receive requests, they then inherit the priority (but not the scheduling policy) of the sending thread. As a result, the relative priorities of the threads requesting work of the server are preserved, and the server work will be executed at the appropriate priority. This message-driven priority inheritance avoids priority-inversion problems.
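
A minimal sketch of this send/receive/reply flow in C, using QNX Neutrino's ChannelCreate()/ConnectAttach()/MsgSend()/MsgReceive()/MsgReply() calls; for simplicity the client and server are two threads in one process, and the message contents are illustrative only:

#include <sys/neutrino.h>
#include <pthread.h>
#include <stdio.h>

static int chid;   /* channel the server receives on */

static void *server(void *arg)
{
    char msg[64];
    for (;;) {
        /* block until a SEND-blocked client's message arrives */
        int rcvid = MsgReceive(chid, msg, sizeof(msg), NULL);
        printf("server got: %s\n", msg);
        MsgReply(rcvid, 0, "done", 5);   /* unblocks the client */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    char reply[64];

    chid = ChannelCreate(0);
    pthread_create(&tid, NULL, server, NULL);

    /* client side: attach a connection and send; we stay
       SEND/REPLY-blocked until the server replies */
    int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);
    MsgSend(coid, "hello", 6, reply, sizeof(reply));
    printf("client got: %s\n", reply);
    return 0;
}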
----
Once you have a file descriptor to a shared-memory object, you use the mmap() function to map the object, or part of it, into your process's address space. The mmap() function is the cornerstone of memory management within QNX Neutrino.
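
A minimal usage sketch in C with a POSIX shared-memory object (the name /demo_shm and the 4096-byte size are arbitrary; on some systems, link with -lrt):

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

int main(void)
{
    /* create/open the shared-memory object and size it */
    int fd = shm_open("/demo_shm", O_RDWR | O_CREAT, 0600);
    ftruncate(fd, 4096);

    /* map it into our address space */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(p, "visible to any process that maps /demo_shm");

    munmap(p, 4096);
    close(fd);
    shm_unlink("/demo_shm");   /* remove the object */
    return 0;
}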
----

Why is a mutex faster than a semaphore?

I do not think this is true for every OS.
In some OSes it may be true because of the implementation. For example, the QNX documentation says:

--------------------------------------------------------------------------------
 Note that in general, mutexes are much faster than semaphores, which always require a kernel entry. Semaphores don't affect a thread's effective priority; if you need priority inheritance, use a mutex. For more information, see "Mutexes: mutual exclusion locks," earlier in this chapter.
--------------------------------------------------------------------------------
On most processors, acquisition of a mutex doesn't require entry to the kernel for a free mutex. What allows this is the use of the compare-and-swap opcode on x86 processors and the load/store conditional opcodes on most RISC processors.
Entry to the kernel is done at acquisition time only if the mutex is already held so that the thread can go on a blocked list; kernel entry is done on exit if other threads are waiting to be unblocked on that mutex. This allows acquisition and release of an uncontested critical section or resource to be very quick, incurring work by the OS only to resolve contention.
--------------------------------------------------------------------------------

Because of the nature of a mutex (only one task can acquire it, and only its owner can unlock it), the implementation can be very simple: just flip a bit between 0 and 1 using hardware instructions, as sketched below.
Since the scope of a semaphore is larger (it maintains a count and may wake other tasks), its implementation is a little more complex and may consume more time than a mutex.
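
A toy sketch of that fast path in C11 atomics; this is not any OS's real implementation, just an illustration of where the kernel entry is avoided:

#include <stdatomic.h>

typedef struct { atomic_int locked; } toy_mutex;   /* 0 = free, 1 = held */

static void toy_lock(toy_mutex *m)
{
    int expected = 0;
    /* fast path: flip 0 -> 1 with one compare-and-swap, no kernel entry */
    while (!atomic_compare_exchange_weak(&m->locked, &expected, 1)) {
        expected = 0;
        /* slow path: a real implementation would enter the kernel here
           (e.g. a futex wait on Linux) to block until the mutex is free */
    }
}

static void toy_unlock(toy_mutex *m)
{
    /* if waiters exist, a real implementation enters the kernel to wake one */
    atomic_store(&m->locked, 0);
}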

What kinds of scheduling policies are there?

uITRON supports only priority-based pre-emptive scheduling. However, round-robin scheduling can be implemented by the application by assigning the tasks the same priority and calling rot_rdq() periodically, e.g. from a cyclic handler, as sketched below.
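
A hedged sketch of that pattern using the uITRON 4.0 service calls (the handler name, priority value, and header name are illustrative; the cyclic handler itself would be created with cre_cyc or the CRE_CYC static API):

#include <itron.h>      /* or the kernel vendor's uITRON header */

#define TASK_PRI 5      /* the shared priority level to time-slice */

/* cyclic handler, invoked by the kernel every N milliseconds */
void timeslice_handler(VP_INT exinf)
{
    /* rotate the ready queue at TASK_PRI: the running task goes to the
       tail and the next ready task at the same priority is dispatched
       (irot_rdq is the non-task-context form of rot_rdq) */
    irot_rdq(TASK_PRI);
}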

QNX supports:
FIFO scheduling → priority-based pre-emptive
Round-robin → timeslice
Sporadic scheduling → the thread's priority oscillates dynamically between a foreground (normal) priority and a background (low) priority, according to its execution time. This behavior is essential when Rate Monotonic Analysis (RMA) is being performed on a system that services both periodic and aperiodic events. Essentially, this algorithm allows a thread to service aperiodic events without jeopardizing the hard deadlines of other threads or processes in the system.

http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.neutrino_sys_arch/kernel.html#SCHEDULING

See the below post for adaptive scheduling.
----
Adaptive partition schedulers are a relatively new type of partition scheduler, pioneered with the most recent version of the QNX operating system. Adaptive partitioning, or AP, allows the real-time system designer to request that a percentage of processing resources be reserved for a particular partition (group of threads and/or processes making up a subsystem). The operating system's priority-driven pre-emptive scheduler will behave in the same way that a non-AP system would until the system is overloaded (i.e. system-wide there is more computation to perform than the processor is capable of sustaining over the long term). During overload, the AP scheduler enforces hard limits on total run-time for the subsystems within a partition, as dictated by the allocated percentage of processor bandwidth for the particular partition.

If the system is not overloaded, a partition that is allocated (for example) 10% of the processor bandwidth, can, in fact, use more than 10%, as it will borrow from the spare budget of other partitions (but will be required to pay it back later). This is very useful for the non real-time subsystems that experience variable load, since these subsystems can make use of spare budget from hard real-time partitions in order to make more forward progress than they would in a fixed partition scheduler such as ARINC-653, but without impacting the hard real-time subsystems' deadlines.
 ------

Difference between processes and threads

In a general context, processes are completely separated from each other in every aspect: execution context (CPU register contents), MMU configuration (their memory spaces are protected from each other by the MMU, and the virtual address space has to be reconfigured on a switch), address space (global variables), stack memory, and other resources. So switching between processes takes considerable effort, such as changing the CPU context and reconfiguring the MMU.
Threads, on the other hand, are just different execution units within a process. They differ only in the code they execute, which means they differ only in their CPU register contents and stacks. Everything else (the MMU configuration, the global address space, and all other resources) is the same and shared between them, as the example below demonstrates. So switching between threads requires only changing the CPU register contents.
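
A small C demonstration of this difference: a thread's write to a global is visible to its creator, while a forked process writes only to its own copy:

#include <pthread.h>
#include <unistd.h>
#include <sys/wait.h>
#include <stdio.h>

int counter = 0;    /* global, i.e. in the process's address space */

void *thread_fn(void *arg) { counter++; return NULL; }

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, thread_fn, NULL);
    pthread_join(tid, NULL);
    printf("after thread: %d\n", counter);     /* 1: same address space */

    if (fork() == 0) { counter++; _exit(0); }  /* child writes its own copy */
    wait(NULL);
    printf("after fork:   %d\n", counter);     /* still 1 in the parent */
    return 0;
}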

What is an adaptive partitioning scheduler?

Difference between SH-2, SH-3, SH-4 and RX


How are shared libraries resolved at run time?

How are modules linked at run time in Linux?

Anatomy of Linux loadable kernel modules
A 2.6 kernel perspective

http://www.ibm.com/developerworks/linux/library/l-lkm/

What is Copy-On-Write?

Tuesday, January 17, 2012

Static priority scheduling and Dynamic priority scheduling

 Static
----
http://www.intechopen.com/articles/show/title/a-fixed-priority-scheduling-algorithm-for-multiprocessor-real-time-systems

Dynamic
----
Dynamic priority scheduling - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Dynamic_priority_scheduling
Earliest deadline first scheduling - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling

Requirements for Hard Real-Time systems.

Minimal latency during task switching
Minimal jitter
Run-to-completion
Preemptive multitasking
Priority inheritance
Meet strict deadlines

Someone who meets the customer deadline in every case is like a hard real-time system. Mik-san, who gets pulled into customer support, spends time on it, and misses the customer deadline, is like a soft real-time system. How do you each perform? What is the difference between you two?

In the hard real-time case, even customer-support processing time is taken into consideration, and the deadline project is still finished within a finite time in every case. Sometimes lower-priority interrupts are even deferred: the mail is just read, and the response is postponed until later. If the support is related to the deadline project, it is folded into that project.

In the soft real-time case, the interrupts (customer support) are processed with long time slices, and the deadline is missed.

Is my OS a hard real-time operating system? In every case, it will finish the higher-priority tasks and higher-priority interrupts.

Hard real time is not about high speed or low latency. It is about the deterministic behavior of the kernel: the execution time of every component is constant, nothing is variable - for example, the task-switching time has a fixed, defined value.

----

In computer science, rate-monotonic scheduling is a scheduling algorithm used in real-time operating systems with a static-priority scheduling class. The static priorities are assigned on the basis of the cycle duration of the job: the shorter the cycle duration is, the higher is the job's priority.

These operating systems are generally preemptive and have deterministic guarantees with regard to response times. Rate monotonic analysis is used in conjunction with those systems to provide scheduling guarantees for a particular application.
----
Applicable to Rate Monotonic Analysis.
----

The software vendor provides a way to specify the resource and real-time restrictions as parameters.
Users then guarantee the real-time behavior of the whole system using these parameters.
----

So, hard real-time systems are those where RMA can be applied:
1) Run-to-completion
2) Priority pre-emptive
3) Execution time (cycles) of all jobs is determined

Since the execution time (cycles) of every RTOS object (system calls, task switching, interrupt processing) is fixed, every job's execution time is fixed as well. The jitter caused by the task scheduler and interrupt processing is negligible (near zero).

With these, apply RMA using the parameters: the number of tasks, the total execution time of each job (Cj), and the period of each job (how often it must run), where each job has to complete before its next cycle begins.
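
A sketch of the classic Liu & Layland utilization test that RMA builds on, with made-up task numbers (compile with -lm); the test is sufficient but not necessary, so exceeding the bound is inconclusive rather than a failure:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double C[] = {1.0, 2.0, 3.0};     /* worst-case execution times */
    double T[] = {8.0, 16.0, 32.0};   /* periods (= deadlines) */
    int n = 3;

    /* total utilization U = sum of Ci/Ti */
    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];

    /* Liu & Layland bound: n * (2^(1/n) - 1) */
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, bound = %.3f -> %s\n", U, bound,
           U <= bound ? "schedulable" : "bound inconclusive");
    return 0;
}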

-----

In this case, the scheduling algorithm is the method in which priorities are assigned. Most algorithms are classified as static priority, dynamic priority, or mixed priority. A static-priority algorithm assigns all priorities at design time, and those priorities remain constant for the lifetime of the task. A dynamic-priority algorithm assigns priorities at runtime, based on execution parameters of tasks, such as upcoming deadlines. A mixed-priority algorithm has both static and dynamic components. Needless to say, static-priority algorithms tend to be simpler than algorithms that must compute priorities on the fly.

The rate monotonic algorithm (RMA) is a procedure for assigning fixed priorities to tasks to maximize their "schedulability." A task set is considered schedulable if all tasks meet all deadlines all the time. So, ITRON has static priorities, and RMA can be used with it to achieve hard real time.
-------------

What is Kernel Jitter?

A Jitter-Free Kernel for Hard Real-Time Systems

Christo Angelov, Jesper Berthing

Abstract. The paper presents advanced task management techniques featuring Boolean vectors and bitwise vector operations on kernel data structures in the context of the HARTEX™ hard real-time kernel. These techniques have been consistently applied to all aspects of task management and interaction. Hence, the execution time of system functions no longer depends on the number of tasks involved, resulting in predictable, jitter-free kernel operation. This approach has been further extended to time management, resulting in a new type of kernel component, which can be used to implement timed multitasking - a novel technique providing for jitter-free execution of hard real-time tasks.

Predictable dynamic scheduling is more promising, but it requires the development of a new generation of safe real-time kernels, which provide a secure and predictable environment for application tasks through predictable task scheduling and interaction, extensive timing and monitoring facilities, and, last but not least, predictable behaviour of the kernel itself. Such functionality cannot be efficiently accomplished using conventional kernel algorithms and data structures, i.e. linked lists used to implement system queues. Extensive linked-list processing introduces substantial and largely varying overhead known as kernel jitter [4].
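
A sketch of the Boolean-vector idea in C: one ready bit per priority level, so finding the highest-priority ready task is a single count-leading-zeros instruction instead of a list walk, with constant cost regardless of how many tasks are ready:

#include <stdint.h>

static uint32_t ready_bits;   /* bit i set => a task at priority i is ready */

static void mark_ready(int prio)   { ready_bits |=  (1u << prio); }
static void mark_blocked(int prio) { ready_bits &= ~(1u << prio); }

/* highest set bit = highest ready priority, in O(1) */
static int highest_ready_priority(void)
{
    if (ready_bits == 0)
        return -1;                           /* idle */
    return 31 - __builtin_clz(ready_bits);   /* GCC/Clang builtin */
}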

highly deterministic (jitter-free)....
------
An interrupt in the middle of a task's execution, or any unexpected processing overhead that decreases deterministic behaviour, is called jitter.