THE MAPGRID PROJECT : CACHING POWER
--------------------------------------------------------------------------------------------------
At present, data caching (such as Ehcache, Memcached, or Redis) has not been implemented in the MapGrid search API (the REST API service). However, we plan to implement high-speed data caching at the REST API level in the future.
--------------------------------------------------------------------------------------------------
The MAP SEARCH and TEXT SEARCH engines allow searching with four parameters:
[ green drop-down + black drop-down + blue drop-down + page number ]

This allows us to build a dedicated cache pattern:
# [ Green category + Black category + Blue drop-down (USA, Japan, Germany, ...) + Page number ]
E.g. [ Events + Tech/Engg Event + USA + Page number 1 ] = one cache pattern.

The cache object stores this cache pattern in the live memory of the caching server, together with the corresponding search result.

When a search query is initiated, the cache patterns stored in memory are checked first. If the pattern exists in the cache, no MongoDB query is required. If a Mongo call is required, the data fetched from MongoDB is immediately cached, so that all subsequent identical queries are served directly from the cache server instead of hitting the backend MongoDB database for each query (see the code sketch at the end of this section).
--------------------------------------------------------------------------------------------------
This makes MapGrid an extremely high-speed (hyper-speed) search system. If the application scales, the entire planet's information/projections can be stored in the high-memory cache server.

To support this caching, there are many high-memory systems on the market:
AIX/Power Systems - Power 775 / Blue Gene Q [ 24 terabytes per rack ]
Oracle/Fujitsu - M32 [ 32 terabytes of memory in one rack ]

With this large amount of memory in a single system image, the entire VIASATF platform data can be cached in memory, leading to a hyper-speed, sub-millisecond search service.

Finally: we do understand that supercomputing systems are expensive. If only small commodity servers are available due to budget constraints, there are memory virtualization systems that aggregate main memory into a single large shared memory (the firm that does this is called ScaleMP).
--------------------------------------------------------------------------------------------------
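As a rough illustration of the cache-aside flow described above, the Java sketch below builds the cache key from the four search parameters and only falls through to MongoDB on a cache miss. This is a minimal sketch under stated assumptions: the class and method names (MapGridSearchCache, search, fetchFromMongo) are hypothetical, and an in-memory map stands in for the real caching server (Redis/Ehcache/Memcached) and for the real MongoDB query, which are not shown.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapGridSearchCache {

    // Stand-in for the caching server's live memory (hypothetical; a real deployment
    // would use Redis, Memcached, or Ehcache instead of a local map).
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Build the cache pattern from the four search parameters:
    // green category + black category + blue drop-down (country) + page number.
    private String cacheKey(String green, String black, String blue, int page) {
        return String.join(":", green, black, blue, Integer.toString(page));
    }

    // Cache-aside lookup: return the cached result if present,
    // otherwise query Mongo and cache the result for subsequent queries.
    public String search(String green, String black, String blue, int page) {
        String key = cacheKey(green, black, blue, page);
        String cached = cache.get(key);
        if (cached != null) {
            return cached;                      // cache hit: no database call needed
        }
        String result = fetchFromMongo(green, black, blue, page);
        cache.put(key, result);                 // cache miss: store for later queries
        return result;
    }

    // Placeholder for the actual MongoDB search query (hypothetical stub).
    private String fetchFromMongo(String green, String black, String blue, int page) {
        return "{\"results\": []}";
    }

    public static void main(String[] args) {
        MapGridSearchCache api = new MapGridSearchCache();
        // First call misses the cache and goes to "Mongo"; the second identical
        // query is served directly from memory.
        System.out.println(api.search("Events", "Tech/Engg Event", "USA", 1));
        System.out.println(api.search("Events", "Tech/Engg Event", "USA", 1));
    }
}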