Ineda’s SoCs are built on a powerful new architecture called Hierarchical Computing Architecture (HCA), aimed at extending battery life while allowing devices to remain always-on for contextual computing. HCA allows multiple CPUs to run independently while sharing on-board resources to offer a unified application experience to the user. It enables optimal use of CPUs, peripherals, accelerators and memory banks based on the use case. Optimal use of resources, along with sophisticated on-chip power management, leads to power-efficient performance. HCA, supported by I/O virtualization, allows various peripherals (such as the display and storage) to be shared across the multiple CPUs.
As shown in the application example below, a given workload running on different processors yields different power and performance. With HCA, the processor best matched to the workload is chosen, resulting in optimal power and processing performance. The 4-tiered HCA of Ineda’s SoCs ensures that processing performance and memory resources are scaled to meet workload needs. It also allows efficient handling of the idle states of high-power processors.
Based on the workload, different portions of the SoC can be turned ON or OFF. The 4-tiered SoC architecture consists of:
1. Policy Manager: Always-ON Subsystem
2. Microcontroller (or MCU): Sensor Hub CPU Subsystem
3. Microprocessor (or MPU): Simple Applications CPU Subsystem
4. Application Processor (or APU): Rich Applications CPU Subsystem
In the example application illustrated above, the Policy Manager handles data acquisition and initial filtering of local sensor data. The MCU manages the actual processing of the local sensor data as well as a simple user interface. The MPU controls a richer graphical user interface and aggregates data from wireless detectors. The APU delivers an advanced human user interface (based on Android) and connects to the cloud via Wi-Fi.
In computing systems, virtualization is a technology generally implemented to meet the growing demands of user applications. Virtualization allows a platform to run multiple operating systems and applications in separate partitions of the SoC, thus allowing a computing system to function as multiple "virtual" systems. In addition, each virtual system can be isolated from the others and function independently.
Lately, virtualization has also been extended to cover I/O. I/O virtualization is a methodology by which I/O devices can be shared across multiple hosts. In systems supporting I/O virtualization, address remapping is generally used to enable assignment of I/O devices to host processors.
Ineda's I/O virtualization technology enables seamless sharing of peripherals such as display, SD, NAND, audio and video among two or more heterogeneous CPUs and operating systems.
Peripheral virtualization in Ineda’s SoCs is achieved through a combination of customized hardware and software, and is specific to each peripheral.
Ineda’s SoCs are designed for power-sensitive wearable and IoT applications. Typical wearable/IoT applications require running high-end operating systems such as Android or Linux in a power-efficient manner. Various design techniques implemented in Ineda’s SoCs bring overall power consumption down to one tenth of that of a typical big.LITTLE implementation of application processors. These design techniques include:
- Architecture-Level:
> Contributed by Hierarchical computing and I/O virtualization technologies
- Hardware System-Level:
> Power and clock gating, multiple power domains, adaptive power scaling of memory and peripherals.
- Software APIs:
> Selection of power and performance states for the system, CPUs and peripherals
> On-demand clocking and frequency scaling controls
Power states are classified into three categories: peripheral/device power states (D-states), CPU power states (C-states) and system power states (S-states). D-states and C-states are similar, except that C-states allow hardware-event-triggered transitions from low-power states back to active states. The system power states control frequency scaling and power gating for each component in the SoC.
A separate power-management block inside the Always-ON Subsystem handles clock gating, power gating and frequency scaling. State selection and control are available as API hooks, letting the user build an application's algorithmic flow according to their power budget.
Many of the advanced features in wearable and mobile platforms, such as context, environment or location awareness, rely on a growing number of ‘always-on’ sensors. The applications and demand for extremely low-power ‘always-on’ sensor technology will continue to increase in the future.
Ineda’s family of SoCs delivers one of the industry’s lowest-power solutions for always-on sensing. It uses a patented IP, called the Policy Manager, to sense and monitor data continuously from a variety of sensors while keeping the main CPU in a deep-sleep state. Once configured, the Policy Manager handles sensing, monitoring and conditioning of sensor data, allowing the main CPU to remain in its lowest power state. Combined with HCA, the Policy Manager delivers an extremely low-power continuous-sensing solution, enabling a new level of wearable and IoT “ambient intelligence”.
New and emerging applications in the wearable and IoT space need a diversity of processors to deliver best-in-class performance and battery life. Any multi-core system is only as effective as the software's ability to take advantage of parallelism and to map application needs onto individual processors.
Ineda’s family of SoCs provides a secure and highly efficient inter-processor communication infrastructure using CPU Link services. CPU Link is specifically designed to deliver low-latency, low-overhead communication between processors with diverse performance and memory resources. It uses a fast interrupt mechanism to achieve low latency for short messages and synchronization, and shared memory to achieve low overhead (“zero-copy”), so data is passed by reference rather than physically copied. Since resources and peripherals are shared between multiple processors, a lightweight semaphore mechanism allows synchronization primitives to surround critical sections of code and ensure there is no resource contention.