1) From Ehcache -> Memcached -> MongoDB
One of the clients I was working for used Ehcache for caching shopping carts and other objects. Ehcache ran on a separate web server and exposed a REST API for retrieving objects. Each call to the Ehcache server took ~70 ms in the best case and ~150 ms on average. The cache was configured to be memory-based, but objects were frequently evicted once the configured object count was reached. Disk persistence avoided the evictions, but retrieval was costly when an object had to come from disk.
The CMS application used MySQL for data persistence.
We looked at NoSQL-based solutions that could address both caching and persistence. MongoDB and Cassandra both came close to meeting our requirements. We went with MongoDB, using Morphia as the object mapper. In our initial test, storing 1 million shopping cart objects into MongoDB achieved 7,500 inserts/second and 4,000 updates/second. MongoDB was set up as a replica set with 3 nodes. Retrieving a shopping cart object by an indexed field took ~2 milliseconds.
We extended the shopping cart caching solution to store the actual products (content objects). The product listing and product detail pages are now much faster than with the MySQL solution: product listing takes ~4 milliseconds and product detail ~3 ms. Each product document is roughly 20 KB.
We also evaluated object-to-document mapping tools; Morphia and Spring Data topped the list. Spring Data was at a very early stage, with APIs changing or being deprecated drastically with each minor release. We ran some performance tests between Spring Data and Morphia, and Morphia stood out in performance.
At Onsale.com, we used Memcached for the home page with page caching, and MongoDB for the Getaways page with data caching only. Home page response time is ~300-500 ms, whereas the Getaways page consistently takes 100-115 ms.
2) Scalable Server-to-Server Calls:
We had been using Apache HttpClient for communication between servers. HttpClient is a blocking API: the calling thread waits for the response, which stops the client application from scaling beyond a few hundred connections. We turned to asynchronous I/O (NIO) to achieve the same functionality. Using the raw API available in the JDK is a little more work; Netty provides a non-blocking HTTP client API built on NIO. This not only improved performance but also helped our application scale beyond several hundred server-to-server connections. The calling thread is not blocked waiting for the response; it proceeds with further computation while the response is pending.
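To illustrate the idea behind the non-blocking style (without pulling in Netty), here is a minimal sketch using the raw JDK NIO API. The tiny in-process echo server and the "ping" payload are made up for illustration; they stand in for the remote service:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;

public class NioClientSketch {
    // One non-blocking request/response round trip against a tiny in-process
    // echo server (a stand-in for the remote service, for illustration only).
    static String echoRoundTrip() throws IOException, InterruptedException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        Thread echo = new Thread(() -> {
            try (SocketChannel c = server.accept()) {
                ByteBuffer buf = ByteBuffer.allocate(64);
                c.read(buf);
                buf.flip();
                c.write(buf);  // echo the request back
            } catch (IOException ignored) { }
        });
        echo.start();

        // The client registers interest with a selector instead of blocking.
        Selector selector = Selector.open();
        SocketChannel client = SocketChannel.open();
        client.configureBlocking(false);
        client.connect(new InetSocketAddress("127.0.0.1", port));
        client.register(selector, SelectionKey.OP_CONNECT);

        String reply = null;
        while (reply == null) {
            selector.select();  // between selects, this thread is free to do other work
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isConnectable() && client.finishConnect()) {
                    key.interestOps(SelectionKey.OP_WRITE);
                } else if (key.isWritable()) {
                    client.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));
                    key.interestOps(SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    client.read(buf);
                    buf.flip();
                    reply = StandardCharsets.UTF_8.decode(buf).toString();
                }
            }
            selector.selectedKeys().clear();
        }
        echo.join();
        client.close();
        selector.close();
        server.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoRoundTrip());  // prints: ping
    }
}
```

Netty wraps exactly this selector/event-loop machinery behind a cleaner callback-based API, which is why we used it instead of hand-rolled NIO in production.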
3) Web Service Calls from a Client Application:
We cannot share cached web service handlers across threads in a multi-threaded application: if the same handler is used by multiple threads at the same time to call different methods, we get errors. This was observed on a client web application running on JBoss 4.2.2. At the same time, creating a new handler is also costly in time (~2-5 seconds) and memory (each handler consumed ~30 MB). The balanced solution we ended up with was caching the handlers in a ConcurrentHashMap, with the thread name as the key and the web service handler (wrapped in a SoftReference) as the value:
try {
    SoftReference<ListingBean> ref = facadeMap.get(Thread.currentThread().getName());
    facade = (ref != null) ? ref.get() : null;  // null if never cached or GC-cleared
    if (facade == null) {
        ListingBeanService service = new ListingBeanService(
                new QName("http://consumer.techiesinfo.com/", "ListingBeanService"));
        facade = service.getListingBeanPort();
        facadeMap.put(Thread.currentThread().getName(), new SoftReference<ListingBean>(facade));
    }
} catch (Exception e) {
    logger.fatal("getDLSFacade() : ", e);
    throw new ServiceUnavailableException(e);
}
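The same pattern, reduced to a self-contained sketch with a placeholder Handler class standing in for the expensive, non-thread-safe web service port above:

```java
import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PerThreadHandlerCache {
    // Placeholder for an expensive, non-thread-safe web service handler.
    static class Handler {
        Handler() { /* imagine ~2-5 s of stub initialization here */ }
        String call() { return "ok from " + Thread.currentThread().getName(); }
    }

    private static final ConcurrentMap<String, SoftReference<Handler>> cache =
            new ConcurrentHashMap<String, SoftReference<Handler>>();

    static Handler getHandler() {
        String key = Thread.currentThread().getName();
        SoftReference<Handler> ref = cache.get(key);
        Handler h = (ref != null) ? ref.get() : null;  // null if GC reclaimed it
        if (h == null) {
            h = new Handler();  // rebuild only for this thread
            cache.put(key, new SoftReference<Handler>(h));
        }
        return h;
    }

    public static void main(String[] args) {
        Handler first = getHandler();
        Handler second = getHandler();
        // Same thread -> same cached instance on the second call.
        System.out.println(first == second);  // prints: true
    }
}
```

A ThreadLocal would achieve similar per-thread isolation; the map-of-SoftReferences variant has the advantage that the GC can reclaim idle 30 MB handlers under memory pressure.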
4) Scalable File Modification Listener:
Useful for reacting to configuration or properties file changes.
a) JNotify framework (http://jnotify.sourceforge.net/), JDK 1.4 and above.
Requires the native .so file to be on the library path on Linux, or the .dll on Windows.
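As a sketch of the same idea without the native-library requirement, here is a minimal file modification listener using the JDK's own java.nio.file.WatchService (available since JDK 7, so not an option for the 1.4-era setups JNotify targets). The temp directory and app.properties file are made up for illustration:

```java
import java.io.IOException;
import java.nio.file.*;

public class ConfigChangeListener {
    // Waits for one modification event on a hypothetical properties file
    // and returns the event kind's name.
    static String watchOneChange() throws IOException, InterruptedException {
        Path dir = Files.createTempDirectory("conf");
        Path config = dir.resolve("app.properties");
        Files.write(config, "timeout=30".getBytes());

        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

        Files.write(config, "timeout=60".getBytes());  // simulate an external edit

        WatchKey key = watcher.take();                 // blocks until an event arrives
        String kind = key.pollEvents().get(0).kind().name();
        key.reset();                                   // re-arm the key to keep listening
        watcher.close();
        return kind;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(watchOneChange());
    }
}
```

In a real configuration reloader, the take()/pollEvents() loop would run on a background thread and re-parse the properties file on each ENTRY_MODIFY.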
5) How to Redirect Pages Based on Country:
We used ipinfodb.com for IP-to-country lookup, but their free API is limited to two requests per second. Beyond that, requests are queued, which is not acceptable for high-traffic sites. The lookup itself was also taking ~75 ms (best case). We implemented the lookup inside our application, where it serves in less than 57 microseconds. Will post the code soon (once we go live)...
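Our in-application code isn't posted yet, but a common way to get IP-to-country lookups down to microseconds is to load the ranges into memory, sorted by range start, and binary-search them. A sketch under that assumption; the ranges and country codes below are made up, and a real table would be loaded from a GeoIP data file at startup:

```java
import java.util.Arrays;

public class CountryLookup {
    // Parallel arrays: range starts/ends (IPv4 as unsigned 32-bit in a long)
    // and the country code for each range, sorted by range start.
    private final long[] starts;
    private final long[] ends;
    private final String[] countries;

    CountryLookup(long[] starts, long[] ends, String[] countries) {
        this.starts = starts;
        this.ends = ends;
        this.countries = countries;
    }

    static long toLong(String ip) {
        String[] p = ip.split("\\.");
        return (Long.parseLong(p[0]) << 24) | (Long.parseLong(p[1]) << 16)
                | (Long.parseLong(p[2]) << 8) | Long.parseLong(p[3]);
    }

    String lookup(String ip) {
        long addr = toLong(ip);
        int i = Arrays.binarySearch(starts, addr);
        if (i < 0) i = -i - 2;                        // last start <= addr
        if (i >= 0 && addr <= ends[i]) return countries[i];
        return "??";                                  // not in any known range
    }

    public static void main(String[] args) {
        CountryLookup lookup = new CountryLookup(
                new long[]{toLong("5.0.0.0"), toLong("8.8.8.0")},
                new long[]{toLong("5.255.255.255"), toLong("8.8.8.255")},
                new String[]{"EU", "US"});
        System.out.println(lookup.lookup("8.8.8.8"));   // prints: US
        System.out.println(lookup.lookup("1.2.3.4"));   // prints: ??
    }
}
```

With the whole table in two long arrays, each lookup is a single binary search with no network hop, which is consistent with sub-100-microsecond latencies.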