Sensory Input

Any device sensor or software source is supported, as long as it exposes an interface we can read from.

The most basic sensors have only two states: on or off, true or false. Examples include pressure and touch sensors, and simple switches.
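A two-state sensor can be sketched in a few lines. This is an illustrative model only; the names (`BinaryReading`, `read_touch_sensor`, the 0.5 threshold) are assumptions, not part of any real device API.

```python
from dataclasses import dataclass
import time

@dataclass
class BinaryReading:
    """One sample from a sensor that can only report on/off."""
    sensor_id: str
    timestamp: float
    value: bool  # True = pressed/active, False = released/idle

def read_touch_sensor(raw_level: float, threshold: float = 0.5) -> BinaryReading:
    # Collapse a raw electrical level into the sensor's two states.
    return BinaryReading("touch-0", time.time(), raw_level >= threshold)

print(read_touch_sensor(0.8).value)  # an active touch -> True
print(read_touch_sensor(0.1).value)  # no touch -> False
```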

Sensors are mainly designed to mimic real senses.
In the end, all of this data is converted to a binary format for storage; if binary can represent all of it, our mind must be able to represent it as well.
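As a concrete sketch of the "everything ends up binary" point: any analog sample can be serialized to raw bytes and recovered. This uses Python's standard `struct` module; the sample value itself is just an arbitrary illustration.

```python
import struct

sample = 0.7071  # e.g. a normalized microphone amplitude
packed = struct.pack("<f", sample)        # 4 bytes, little-endian IEEE-754 float
restored = struct.unpack("<f", packed)[0]

print(len(packed))                  # 4 bytes of binary storage
print(abs(restored - sample) < 1e-6)  # the analog value survives the round trip
```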


Flow

Sensor data is ported directly to, or made available for fetching by, the pattern-processing layer, which processes it using ICL.

ICL processes all incoming raw sensory input and selects whether to delegate to a specialized ICL implementation depending on the input type.

Visual and audio data are encoded in formats that the default ICL implementation, designed for general analog sensor input, cannot process without significant effort.

They must therefore be handled by specialized ICL routines that extract and identify patterns from them separately. These algorithms are not required to be perfect; any open-source recognition library will do, as long as it provides a comparison function.
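The flow above can be sketched as a dispatch table: input is routed to a specialized ICL routine when one is registered for its modality, otherwise to the default analog handler. All function and modality names here are placeholders, not an existing API; a real visual routine would wrap a recognition library's comparison function.

```python
from typing import Callable, Dict, List

def default_analog_icl(samples: List[float]) -> str:
    # Placeholder for the default ICL over general analog input.
    return f"analog-pattern({len(samples)} samples)"

def visual_icl(samples: List[float]) -> str:
    # Placeholder for a specialized routine, e.g. wrapping an
    # open-source recognition library's comparison function.
    return f"visual-pattern({len(samples)} samples)"

# Specialized ICL implementations, keyed by input modality.
SPECIALIZED: Dict[str, Callable[[List[float]], str]] = {
    "visual": visual_icl,
}

def process(sensor_type: str, samples: List[float]) -> str:
    # Select a specialized ICL if one exists; fall back to the default.
    handler = SPECIALIZED.get(sensor_type, default_analog_icl)
    return handler(samples)

print(process("visual", [0.1, 0.2]))   # routed to the specialized routine
print(process("temperature", [21.5]))  # falls back to the default ICL
```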


Rationale

According to quantum mechanics, any data in existence has wave properties and can therefore be described as a waveform.

No matter how complex a wave seems, it can be decomposed into multiple (possibly infinite) simple sine waves, according to the theory of Fourier series.
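This decomposition can be demonstrated with the discrete Fourier transform (here via `numpy.fft`): a composite wave built from two sine components is taken apart, and exactly those two frequencies reappear in the spectrum. The specific frequencies and amplitudes are arbitrary illustration values.

```python
import numpy as np

# Compose a wave from two sines: 3 Hz at amplitude 2.0, 7 Hz at amplitude 0.5.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)  # 1 second at 1000 Hz
wave = 2.0 * np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

# Decompose: amplitude per frequency bin.
spectrum = np.abs(np.fft.rfft(wave)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), d=1.0 / 1000)

# Only the two component frequencies stand out.
peaks = [int(f) for f in freqs[spectrum > 0.1]]
print(peaks)  # [3, 7]
```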

Therefore, a universal solution must be able to apply ICL to these basic waveforms. Since the decomposed sine waves are always analog, analog input is treated as the default.

Although visual and audio signals can in theory be decomposed into simple sine waves and processed by the default ICL, doing so is impractical.

Specialized decoding and ICL schemes will therefore be defined for them independently, to boost learning performance.