MemeStore - a Design

The MemeStore concept consists of using traits, as in personality traits, in the storing of data, or memories, to build a knowledge base as a weighted graph. The design of the MemeStore targets multicore computers: most of the cores are dedicated to memory store bots.

MemeStore - a Design on Multicore Computers

The MemeStore concept is based on the way humans and animals store memories.
MemeStore is an adaptive knowledge base, which may be given a personality, through
the use of personality traits.

The whole concept is based on animals, which evolved with the imperatives to survive and
reproduce. These imperatives are mediated by hormones and memories, creating
attractions and aversions related to danger and safety -- danger being anything to do with
predation or any other type of harm, and safety being food, shelter, play, and reproduction.
These attractions and aversions are stored as memories, and then drive, or predispose
the animal to, certain types of behaviour -- see MemeStore - a Knowledge Base.

MemeStore Running on a 30/40-Core Computer with 64G of Memory

18/28 graph traversal bots with one per core.
1 million word/node knowledge base.
500/1000 bytes/node =>
1 Gigabyte - Maximum 10G.
1 Gigabyte for visualization data - Maximum 10G.
1 Gigabyte as working scratch memory - Maximum 10G.
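
The sizing above can be checked with a quick calculation; the figures are taken directly from the list, and the 10G values are the stated maxima:

```python
# Back-of-the-envelope sizing for the 1-million-node knowledge base.
NODES = 1_000_000
BYTES_PER_NODE_LOW, BYTES_PER_NODE_HIGH = 500, 1000

low_gb = NODES * BYTES_PER_NODE_LOW / 1e9    # lower bound, in gigabytes
high_gb = NODES * BYTES_PER_NODE_HIGH / 1e9  # upper bound, in gigabytes

# Three 1-GB pools (graph, visualization, scratch), each capped at 10 GB.
baseline_gb = high_gb * 3
```

At 500-1000 bytes per node, the base graph fits in 0.5-1 GB, so the three 1-GB pools leave ample headroom on a 64G machine.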
12 MemeStore modules, including:
        graph traversal,
        character recognition,
        semantic command mapper,
        memory trails,
        memory stores,
        personality traits/attributes,
        reward system,

Module 1 - Graph Toolkit

Graph library to build the knowledge base.
Granular locks are used to lock sections of the graph.
        word(char,64), concept(uint),
        locale(uint), type(uint/isa,hasa,etc.),
        weight(uint), like(uint), dislike(uint),
        vertex(uint,200/1000), search-vertex(uint)
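
The record layout and granular locking above might look like the following sketch, where a dataclass stands in for the packed record and one lock per graph section is an assumed granularity:

```python
import threading
from dataclasses import dataclass, field

@dataclass
class Node:
    word: str                      # word(char,64)
    concept: int                   # concept(uint)
    locale: int                    # locale(uint)
    type: int                      # type(uint): isa, hasa, etc.
    weight: int = 0                # weight(uint)
    like: int = 0                  # like(uint)
    dislike: int = 0               # dislike(uint)
    search_vertex: int = 0         # search-vertex(uint)
    vertices: list = field(default_factory=list)  # vertex(uint,200/1000)

class GraphSection:
    """A section of the graph protected by its own (granular) lock."""
    def __init__(self):
        self.nodes = {}
        self.lock = threading.Lock()

    def add(self, node):
        with self.lock:            # only this section is locked, not the whole graph
            self.nodes[node.concept] = node
```

With per-section locks, traversal bots working in different sections of the graph never contend with each other.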

Module 2 - OCR and Vision

A way to read characters using a character recognition engine (OCR).
The vision part of this module consists of a hardware and software system
which is able to identify living and non-living entities, as well as perceive
and understand a living entity's behaviour and its reactions to stimuli.

Module 3 - Scanning/Reading

Updated content engine with dictionary reading rules.
Finite state machine generator using XML-based rule sets.
Read from encrypted databases and rule sets.
Use grammar, thesauri, and dictionaries.
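
The finite state machine generator driven by XML rule sets could work roughly as follows; the rule format here is invented for illustration, and the toy machine merely counts words:

```python
import xml.etree.ElementTree as ET

# A hypothetical XML rule set: states, and transitions on input classes.
RULES = """
<fsm start="START">
  <rule from="START" on="letter" to="WORD"/>
  <rule from="WORD"  on="letter" to="WORD"/>
  <rule from="WORD"  on="space"  to="START"/>
</fsm>
"""

def build_fsm(xml_text):
    """Generate a transition table from the XML rules."""
    root = ET.fromstring(xml_text)
    table = {(r.get("from"), r.get("on")): r.get("to") for r in root.findall("rule")}
    return root.get("start"), table

def classify(ch):
    return "letter" if ch.isalpha() else "space"

def run(text):
    """Drive the generated machine over the text, counting words entered."""
    state, table = build_fsm(RULES)
    words = 0
    for ch in text:
        new = table.get((state, classify(ch)), state)
        if state == "START" and new == "WORD":
            words += 1             # transition into a word
        state = new
    return words
```

Swapping the rule set changes the machine without touching the generator, which is the point of driving it from (possibly encrypted) XML.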

Module 4 - Search Engine

Mapping of memory trails to user queries and scanned pages.
Dynamic updates of the relevance of scanned pages using signatures of memory trails.
Use XML-based rule sets.
The search module is based on our CETE engine.
Use grammar, thesauri, and dictionaries.
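
One way the dynamic relevance update might work: pages carry the memory-trail signatures they were associated with, and a query that reuses those signatures boosts the page. This is a sketch; the weighting scheme is an assumption:

```python
def update_relevance(pages, query_signatures, boost=1.0):
    """pages: {page_id: {"sigs": set, "relevance": float}}.
    Each signature shared with the query raises the page's relevance."""
    for page in pages.values():
        overlap = page["sigs"] & query_signatures
        page["relevance"] += boost * len(overlap)
    return pages

pages = {
    "doc1": {"sigs": {"s1", "s2"}, "relevance": 0.0},
    "doc2": {"sigs": {"s3"},       "relevance": 0.0},
}
update_relevance(pages, {"s1", "s2"})
```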

Module 5 - Semantic Command Mapper

Mapping of natural language commands to pseudo or hardware-based commands.
Uses a semantic command map with actor, patient, subject, etc.
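
A role-labelled parse mapped onto a pseudo-command might look like this sketch; the role names follow the text, while the command table and pseudo-command names are invented:

```python
# Hypothetical table from (action, patient) pairs to pseudo-commands.
COMMAND_MAP = {
    ("fetch", "document"): "LOAD_DOC",
    ("show",  "graph"):    "RENDER_GRAPH",
}

def map_command(parse):
    """parse: {"actor": ..., "action": ..., "patient": ...} from the NL front-end."""
    key = (parse["action"], parse["patient"])
    pseudo = COMMAND_MAP.get(key)
    if pseudo is None:
        return None                # no mapping: the command is not understood
    return {"cmd": pseudo, "actor": parse["actor"]}
```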

Module 6 - Memory Stores

Access, copying, retrieval, update for both temporary and permanent stores.
Rules for increasing and decreasing weight/relevance/like/dislike of stored memory.
Data driven module with lexer and parser.
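
The rules for moving weight, like, and dislike could be data-driven along these lines; the step size and the rule that every access raises weight are assumptions:

```python
def reinforce(memory, liked, step=1):
    """Apply a simple rule set: access raises weight; feedback moves like/dislike."""
    memory["weight"] += step       # every access increases relevance
    if liked:
        memory["like"] += step
    else:
        memory["dislike"] += step
    return memory

m = {"weight": 0, "like": 0, "dislike": 0}
reinforce(m, liked=True)
reinforce(m, liked=False)
```

In the full module these rules would be parsed from an external rule file by the lexer and parser, not hard-coded.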

Module 7 - Memory Trails

A logging mechanism to store traversals of the knowledge base graph.
Trails are stored as signatures and can be overlaid on top of the graph,
without changing the graph; they are referred to as path-overlays.
Capability to retrieve a specific trail and its associations; see the white paper.
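
A trail can be logged as a compact signature of the visited nodes and kept beside the graph rather than inside it; a minimal sketch, where the hashing scheme is an assumption:

```python
import hashlib

def trail_signature(path):
    """Hash an ordered node path into a compact, comparable signature."""
    return hashlib.sha256("->".join(map(str, path)).encode()).hexdigest()[:16]

class TrailLog:
    """Path-overlays: trails live beside the graph, never modifying it."""
    def __init__(self):
        self.trails = {}                      # signature -> node path

    def record(self, path):
        sig = trail_signature(path)
        self.trails[sig] = list(path)         # retrievable by signature
        return sig

log = TrailLog()
sig = log.record([1, 5, 9])
```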

Module 8 - Query/Command

Natural language query module for accessing the memory stores.
This module relies on the OCR/vision and the reading/scanning module.
This module and the semantic command mapper may be joined.
Test by generating a natural language script with t-script.

Module 9 - Personality Traits

A database of traits and their associated concepts, stored as a graph.
Links sets of commands to execute, seen as observable behaviour that a person could interpret.
Links concepts of character traits, aversions, and attractions with stored memories.
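
The trait-to-memory linkage could be sketched as a small adjacency structure, where traits and memories meet through shared concepts; all the data here are illustrative:

```python
# Trait database as a graph: trait -> concepts; memories also carry concepts.
traits = {
    "curious":  {"concepts": {"novelty", "exploration"}},
    "cautious": {"concepts": {"danger", "safety"}},
}
memories = {
    "m1": {"concepts": {"novelty"}},
    "m2": {"concepts": {"danger", "shelter"}},
}

def memories_for_trait(trait):
    """Memories whose concepts overlap the trait's concepts."""
    wanted = traits[trait]["concepts"]
    return {mid for mid, m in memories.items() if m["concepts"] & wanted}
```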

Module 10 - Behaviour/Reward Engine

The increase or decrease of the relevance of stored memories is based on behaviour
observed or effected by the MemeStore. Several criteria are used,
such as frequency and user reinforcement.

The engine is built using propositional calculus, and modeled using graph theory and
De Bruijn sequences. A level of fuzziness is added, consisting of two meters dealing with like and dislike.
The propositional calculus engine of the MemeStore can ultimately
be burned onto an integrated circuit. Our MemeStore PCG engine cannot be circumvented by building a similar propositional calculus engine using Bayes' theorem, formal grammar, or graph theory methodologies.
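
The two fuzzy meters over like and dislike could collapse into a single preference degree roughly like this; the normalization into [-1, 1] is an assumption:

```python
def fuzzy_preference(like, dislike):
    """Collapse the like/dislike meters into a preference degree in [-1, 1]."""
    total = like + dislike
    if total == 0:
        return 0.0                 # no evidence either way
    return (like - dislike) / total
```

The reward engine could then scale its relevance adjustments by this degree rather than by a hard true/false judgement.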

Module 11 - Conversation

Conversing with the MemeStore relies on voice recognition and vision, through
the use of a vision system, a speech recognition engine, and a speech engine.
The conversation module is built on top of the query/command module.

The OCR/vision engine, as well as the speech recognition engine, relies on
the scanning/reading or query/command module as a back-end to its function.
Both engines output text that is fed into the scanning/reading module, which then
gauges whether or not the input text is meaningful; that is, whether it relates to concepts
and notions that are understandable by the MemeStore.

The conversation module is modeled on the way most people carry a conversation.
They are constantly accessing their knowledge base, and weighing the relevance of
the information based on their personality profile. A simple example would be carrying
on a conversation in the context of a reception. One would choose to start a conversation
with a person whose attire is attractive; one would then choose to continue or
terminate the conversation, depending on the relevance and attractiveness of what is
said by the other party. The manner in which a conversation is terminated again depends
on a person's personality profile and the context of the situation -- experiential memories.

Module 12 - Visualization

A way to visualize the MemeStore in action, by overlaying the memory trails in
a heat color-coded fashion on top of a 3D sphere-like image. For search engines,
the data is overlaid on top of a 3D view of the world, if the engine is world accessible.
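
Heat-coding a trail's strength into an overlay color could be as simple as a linear blend from cold blue to hot red; the palette choice is an assumption:

```python
def heat_color(strength):
    """Map a trail strength in [0, 1] to an (r, g, b) heat color."""
    s = max(0.0, min(1.0, strength))   # clamp out-of-range strengths
    r = int(255 * s)                   # hot  -> red
    b = int(255 * (1 - s))             # cold -> blue
    return (r, 0, b)
```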

Module 13 to N

Modules can be added to the MemeStore to extend it, or to build a humanoid robot.
Such modules include: a manipulation module, a data mining and analysis module,
a gaming module, a generic problem solving module, a learning-by-mimicry
module, etc.

The CETE Engine

After building a semantic network with natural language relationships between keywords,
the CETE search engine allows the following to be done:


  1. Index file-name of documents for path specification (path-spec) queries.
  2. Index keywords found in pages and documents for keyword queries.
  3. Content analyze the unstructured text in the pages and documents, using our statistical natural language processing (NLP) approach.
  4. Build signatures of every set of extracted keywords and their relationships. These signatures are called semantic signatures.
  5. Build signatures of the path traversed, by every set of extracted keywords, in the semantic network. These signatures are called network path signatures.
  6. Associate semantic and network path signatures with scanned pages and documents.
  7. Sort and log the signatures for retrieval of scanned pages and documents.
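
Steps 4 and 7 can be sketched as building a canonical signature over extracted keyword relationships, so that the same relationships always yield the same signature; the hashing and triple encoding are assumptions:

```python
import hashlib

def semantic_signature(relations):
    """relations: set of (keyword_a, relation, keyword_b) triples.
    Sorting first makes the signature independent of extraction order."""
    canon = "|".join("{}:{}:{}".format(*t) for t in sorted(relations))
    return hashlib.sha256(canon.encode()).hexdigest()[:16]

sig1 = semantic_signature({("dog", "isa", "animal"), ("dog", "hasa", "tail")})
sig2 = semantic_signature({("dog", "hasa", "tail"), ("dog", "isa", "animal")})
```

Because the signature is a short fixed-size string, it sorts and logs cheaply, which is what makes step 7's retrieval index practical.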

Search Queries
  1. Make searches using path-spec (file-name keywords) queries.
  2. Make searches using keywords only (i.e. clustering legacy way).
  3. Make searches by traversing the semantic network looking for relationships between query keywords.
  4. Build a graph of the relationships between query keywords (semantic signatures).
  5. Extract network path signatures from these semantic network traversals.
  6. Sort and log network paths and semantic signatures for retrieval of scanned pages and documents.
  7. Track user behaviour (desktop clicking, voice, eye movement, etc.) to modify the strength of these network paths.
  8. Return results by comparing the query signatures with the stored network-path and semantic signatures.
  9. Return results with just the strengthened network paths which refer to files that were deemed to satisfy users.
  10. Build semantic network path overlays, using the extracted paths for visual feedback; for example, a different color (e.g. heat-coded), depending on how satisfied users were with the results of the query, or how strong the relationships between keywords, and the concepts to which they relate, are.
  11. Build networks based on path-spec keywords, which can be displayed to users interested in what the collection of keywords they use in specifying their file names looks like as a graph.
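
Step 8's comparison of query signatures against stored signatures could use simple set overlap; Jaccard similarity is an assumption here, since the source does not name a measure:

```python
def jaccard(a, b):
    """Overlap between two signature sets, in [0, 1]."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def rank(stored, query_sigs):
    """stored: {doc: signature set}. Returns docs ordered best-first."""
    return sorted(stored, key=lambda d: jaccard(stored[d], query_sigs), reverse=True)

stored = {"doc1": {"s1", "s2"}, "doc2": {"s2", "s3", "s4"}}
order = rank(stored, {"s1", "s2"})
```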

Patent Pending