Friday, August 29, 2008

Gilbarco SpA success case: In Scrum We Learn

During 2007-2008, Gilbarco SpA benefited from the Scrum training and coaching offered by mondora SpA. Scrum and the agile approach conveyed in these sessions strongly motivate project and development staff to stay committed to their objectives, improving quality and saving effort. Knowledge has been spread throughout the teams, avoiding specialty gatekeepers and promoting a culture of continually learning from mistakes. This is where the Software Department of Gilbarco SpA is now going, and how it is approaching its development work, iteration after iteration. By adopting and adapting Scrum to its enterprise culture from the start, development staff and management are already learning from mistakes and making the entire development process more efficient and scalable. Currently there are 8 teams running, collecting experience from more than 40 Sprints of history (as Gilbarco says, In Scrum We’ll Earn).

Wednesday, May 7, 2008

mondora is expanding with a new office



The newly formed Software Development division of mondora is based in a new office in Valtellina. Valtellina is a green valley where the air is still breathable, the pace of life is still human, and you can see mountains, cows and other relaxing things.

Our aim is to develop and maintain software with a fresh mindset; we're focusing on a culture of software development different from the others, where programming languages are a means to build software rather than the prerogative of each individual. We will focus on delivering high-value-added software for a specific target: companies that want to produce revenue with low-cost solutions.

The office now hosts 4 developers and, when present, the executive board.

The office location is:
mondora.com
via Vanoni 59/A
Morbegno
Sondrio (Italy)

Sunday, April 27, 2008

Write your app, not billing code: collect and share Experiences!

Nowadays, development follows a bottom-up direction, focusing on writing, or rather rewriting, sub-systems and even whole systems without leveraging experience.
An experience is a practical contact with, and observation of, facts or events. The culture of thinking in the direction of Software as a Service lets development be, ideally, committed and packaged into an Experience, where all the behaviors of the implemented scenario are collected, stored and executed.
An Experience is a sense feature that relies on the SOA paradigm and collects in a single point all the information about what a system or sub-system does.
In a sense scenario, relying on the SOA paradigm means that Experiences:

  • are thought of as a Service;
  • declare a Service Level Agreement contract;
  • declare what to do when there is no capacity to serve more requests;
  • are manageable, in terms of both operational and business management, through a customizable web 2.0 console.
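As a sketch, the properties above could map onto an interface like the following. All names here (Experience, SlaContract, whenOutOfCapacity) are hypothetical illustrations for this post, not sense's actual API:

```java
// Hypothetical sketch of what an Experience might declare, following the
// properties listed above. Names are illustrative, not sense's real API.
interface Experience<Req, Res> {
    SlaContract sla();                  // the declared Service Level Agreement
    Res handle(Req request);            // the normal service behavior
    Res whenOutOfCapacity(Req request); // declared fallback when saturated
}

final class SlaContract {
    final long maxLatencyMillis;     // response-time bound
    final int maxConcurrentRequests; // capacity bound

    SlaContract(long maxLatencyMillis, int maxConcurrentRequests) {
        this.maxLatencyMillis = maxLatencyMillis;
        this.maxConcurrentRequests = maxConcurrentRequests;
    }
}
```

The console-management aspect is left out of the sketch; the point is that the contract and the out-of-capacity behavior travel with the Experience itself.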

Experiences let developers focus on what an application is primarily for: revenue. An Experience must be described by an SLA contract that declares the Experience's different suffering algorithms and the logic for balancing across different instances of the same Experience family.

Suppose a company would like to sell goods to a target customer base. They would like to try several different campaigns on that target without losing too much money.

Losing Money is, for the Experience, an SLA status, and it mirrors the wellness status of Gaining Money. The Experience packages within itself all the services, components and objects needed to provide the service. It declares its SLA statuses and some hooks that collect all the KPIs (Key Performance Indicators) of the application, which are evaluated by a business algorithm.
The business algorithm correlates all the KPIs and tells the platform the Experience's status.
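A minimal sketch of such a business algorithm, assuming hypothetical KPI names and a two-state status scale (the real status grid would be richer):

```java
import java.util.Map;

// Hypothetical sketch of a business algorithm correlating KPIs into an
// Experience status. KPI names and the two-state scale are assumptions.
enum ExperienceStatus { GAINING_MONEY, LOSING_MONEY }

final class BusinessAlgorithm {
    // Correlate the collected KPIs: here, simply revenue vs. cost per period.
    static ExperienceStatus correlate(Map<String, Double> kpis) {
        double revenue = kpis.getOrDefault("revenuePerHour", 0.0);
        double cost = kpis.getOrDefault("costPerHour", 0.0);
        return revenue >= cost ? ExperienceStatus.GAINING_MONEY
                               : ExperienceStatus.LOSING_MONEY;
    }
}
```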

Once installed and assimilated, the Experience is pooled as an instance of its family and starts working with its status attached. When sense allocates it, judging it a valid Experience for the current session calls, the Experience starts collecting its associated KPIs and correlating them, pulsing the result into the Experience status grid.

KPIs are propagated to the console in real time, too.

Endnotes
Almost anything can live inside an Experience: a Service, or an external application wrapped in a business service level agreement scenario. Experiences are portions of knowledge that satisfy a custom business behavior in a SOA-oriented fashion.
Experiences can be identified as ACID portions of knowledge: they are Atomic, Consistent, Isolated, and the business they provide is Durable.

Tuesday, April 22, 2008

Sense Virtual Appliance on Amazon EC2

Sense is a platform that can deploy software with a Service Level Agreement between object calls. It doesn't matter whether the objects are local, distributed on a grid, or outside the system. Sense implements the pure concept of SaaS (Software as a Service). We moved straight ahead and demonstrated that hardware should be treated like software: we created a Scent that attaches virtual hardware to a Sense Federation and uses Hardware as a Service, letting the system scale onto hardware when more hardware is available under a given business constraint.

Now mondora's engineers have a new concept in Sense packaging: a production-ready Sense instance or Federation, deployable as a Virtual Appliance on Amazon EC2.

The main feature of the Sense Virtual Appliance, over the features already available in Sense, is the perception of the hosting platform (in this case EC2), and thus the capability to scale on that platform when Business Service Level Agreements are violated. This lets you spend money only when it needs to be spent, and configure a business policy for each application.
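As a sketch of the kind of decision involved, under assumed thresholds (the real appliance's policies are configurable per application): grow while the business SLA is violated, shrink when there is slack, so money is spent only when needed.

```java
// Hypothetical sketch of an EC2 scaling decision driven by a business SLA.
// Thresholds and the one-instance-at-a-time step are illustrative assumptions.
final class Ec2ScalingPolicy {
    final long slaLatencyMillis;
    final int minInstances;
    final int maxInstances;

    Ec2ScalingPolicy(long slaLatencyMillis, int minInstances, int maxInstances) {
        this.slaLatencyMillis = slaLatencyMillis;
        this.minInstances = minInstances;
        this.maxInstances = maxInstances;
    }

    // Returns the desired instance count given the observed latency.
    int desiredInstances(int current, long observedLatencyMillis) {
        if (observedLatencyMillis > slaLatencyMillis) {
            return Math.min(current + 1, maxInstances); // SLA violated: grow
        }
        if (observedLatencyMillis < slaLatencyMillis / 2) {
            return Math.max(current - 1, minInstances); // plenty of slack: shrink
        }
        return current;                                 // within SLA: hold
    }
}
```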

The feature will be released next month as a package option of Sense.

Monday, April 7, 2008

Google Sense Toolkit: reliable client-side applications


The integration of Google Web Toolkit into sense is done by implementing a communication layer based on JSON. All incoming requests can now be answered not only via XML-RPC, HTTP, Web Services, IIOP or JMS, but also via JSON. This makes AJAX platforms reliable: the client works asynchronously, while a highly available service infrastructure is managed by sense. Adding an offline technology such as Google Gears makes it possible to manage service levels on the network connection too, keeping the operational environment ready at all times; when the network is occasionally available (despite its instability), the client works with a server system such as the sense infrastructure.

Google Web Toolkit enables developers to write client applications in Java and then post-compile them into JavaScript. When developing with GWT, the mindset is similar to AWT's: developers work only with Java objects, and the Google post-compiler generates the right code for each browser. sense is implementing the infrastructure to work with Google Web Toolkit and can generate a set of objects representing the hosted services to run on the client side. A developer writes a service, declares its SLA policy, and generates the corresponding code invokable in the GWT domain. The sense GWT implementation bridges the communication gap between sense and the corresponding objects on the client.
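As an illustration of the JSON side of such a layer, here is a hand-rolled sketch that renders a service result as a JSON object. A real layer would use a proper JSON library with full escaping; this only shows the shape of the response:

```java
import java.util.Map;

// Hypothetical sketch: the same service result that could go out over
// XML-RPC or JMS, rendered as a JSON object for the AJAX client.
// Hand-rolled and unescaped, for illustration only.
final class JsonResponder {
    static String toJson(Map<String, String> result) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : result.entrySet()) {
            if (!first) sb.append(",");
            first = false;
            sb.append("\"").append(e.getKey()).append("\":\"")
              .append(e.getValue()).append("\"");
        }
        return sb.append("}").toString();
    }
}
```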


Standard Asynchronous Calls

Asynchronous calls, as in AJAX, are those where the client sends a request to sense and receives a response in near real time. On the client side it is easy to build a system that uses the generated sense service stubs, delegating to the system the job of making the call. This is what GWT normally does; sense's services are extended so that they can be called from the front end, and the Google sense Toolkit extension provides marshalling and unmarshalling of calls between sense and the client. In sense, each service, bizflow or Feeling has a small family of helper interfaces and classes. Some of these classes, such as the service proxy, are generated automatically behind the scenes, and you will generally never realize they exist on the client. Just bring their source into the client project and start using it, keeping sense's calls invisible. Unlike GWT, sense does not require a service implementation for each service: just use the sense-generated service stub on the client and delegate to the sense client library, compiled to JavaScript for in-browser execution.
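The asynchronous shape described above can be sketched in plain Java. Callback here mirrors the role of GWT's AsyncCallback, and the Function stands in for the actual wire call; everything else (names, the single-thread executor) is an assumption of the sketch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

// Hypothetical sketch: the client hands the generated stub a callback,
// and the system makes the call off the caller's thread.
interface Callback<T> {
    void onSuccess(T result);
    void onFailure(Throwable caught);
}

final class AsyncServiceStub<Req, Res> {
    private final Function<Req, Res> remoteCall; // stands in for the wire call
    private final ExecutorService executor =
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true); // do not keep the JVM alive for the sketch
                return t;
            });

    AsyncServiceStub(Function<Req, Res> remoteCall) { this.remoteCall = remoteCall; }

    void invoke(Req request, Callback<Res> callback) {
        executor.submit(() -> {
            try {
                callback.onSuccess(remoteCall.apply(request));
            } catch (Exception e) {
                callback.onFailure(e);
            }
        });
    }
}
```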

Event Driven Calls

Comet-like calls are those where the client subscribes to receive a set of notifications, and sense pushes the proper data to the client as it becomes available on the server. With a Comet approach it is easy to implement, for example, a dynamic graph that changes whenever something changes on the server; the sense graph implementation is based on Chronoscope. The data is delivered by sense over a single, previously opened connection. This approach significantly reduces the latency of data delivery. The architecture relies on a view of data that is event-driven on both sides of the HTTP connection. It suits SOA architectures really well, where the only substantive change is that the endpoint is the browser. While Comet is similar to Ajax in that it is asynchronous, applications implementing the Comet style can communicate state changes with almost negligible latency. This makes it suitable for many types of monitoring and multi-user collaboration applications that would otherwise be difficult or impossible to handle in a browser without plug-ins. sense provides a Comet-ready module, available to developers on the bleeding edge, implementing the best IO patterns and practices and making applications highly scalable. sense is making the event-driven future a present reality.
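A minimal sketch of the subscribe/notify shape described above. Topic names and types are illustrative; the real module delivers events over the already-open HTTP connection rather than in-process:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Hypothetical sketch: the client subscribes once, and every later
// server-side change is pushed to it.
final class CometChannel<T> {
    private final Map<String, List<Consumer<T>>> subscribers = new ConcurrentHashMap<>();

    void subscribe(String topic, Consumer<T> client) {
        subscribers.computeIfAbsent(topic, k -> new CopyOnWriteArrayList<>()).add(client);
    }

    // Server side: push a state change to every subscriber of the topic.
    void publish(String topic, T event) {
        subscribers.getOrDefault(topic, List.of()).forEach(c -> c.accept(event));
    }
}
```

A dynamic graph, for instance, would subscribe to a topic and redraw on every pushed data point.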

Endnotes
Adopting sense as a high-performance computing environment guarantees that everything on the server side is addressed and that performance is monitored in real time. The marriage between sense and GWT- and Google Gears-like technologies shifts software computing to another paradigm, where network availability is treated as a possible system fault and managed as such.

Friday, April 4, 2008

BEYOND SOA IS SPOA


The IT market in general – and software vendors in particular – have emphasized the SOA paradigm (Service Oriented Architecture) as the silver bullet for current and future software development. Recent studies (e.g. “SOA and BPM for Enterprise Applications: A Dose of Reality”, AMR Research Inc, May 2007) show how SOA is perceived and applied within enterprises, and what directions enterprises should take in order to gain advantages from its adoption. Whether an enterprise plans for SOA design or not, there are three major difficulties to deal with:
  • Roles involved. Even when designing simple business services, many company roles are involved; therefore many skills play on the same ground, possibly using different tools
  • Implementation model. Whether it is Waterfall, Synchronized or Single, a certain number of tools is required to support it. Even when marked with the same brand, platforms for implementing SOA are often collections of different tools, sometimes not well integrated
  • Organizational issues. These are much more complex than the technical ones, and they often require a cultural change in order to be addressed
The Sense software platform leverages its SPOA (Single Point Of Accountability) key concept to go one step beyond SOA and address the above points. This paper illustrates how Sense implements the SOA paradigm and how the SPOA model offers a way to facilitate the cultural change needed to better exploit the technology's benefits.

A real-life scenario
The complexity underneath the development of a business service does not depend on the service itself. It depends rather on the (organizational, cultural) complexity of the company. Incredibly simple requests may require a long time and a great amount of money to accomplish.

Roles involved
Within a big company, many distinct roles have to participate in the entire process, from inception (business modelling) to execution (operation). All roles belong to three major areas:
  • Business. Basically the business owner and the business analyst belong to this area. They are responsible for the high-level description of business goals and for the detailed description of the business flow, data flow and user interaction
  • Architecture. The technical architect translates business concepts into executable units, with algorithms to be developed and/or systems to be used
  • Operation. One or more professionals are required to develop, test and deploy the service designed by the architects. An additional role is required to monitor service execution and ensure its performance
From 3 to 7 (or even more) different roles are required to implement a business service. The number cannot be made smaller, and in a normal situation this results in delays, misunderstandings and different (sometimes incompatible) points of view. The SPOA model (described later) does not decrease the number of roles; rather, it offers an approach to service development that avoids most of the problems mentioned above.

Implementation model

There are three major models that companies use to control service lifecycle. Namely, they are the Waterfall Model, the Synchronized Model and the Single Model, as depicted in the picture below.

The Waterfall Model is a one-way process (from business to operation) where every role uses its own tool/repository. The constructs of each phase are passed on to the following one. Any change in any phase has to be managed manually.

In the Synchronized Model, all roles belonging to Business and Architecture share a common part of the model, allowing a lot of information to be maintained consistently throughout the lifecycle. The actual benefits of this approach depend on tool features.

The Single Model is the best one in terms of simplicity. One single model is shared among the roles, and each of them may use specialized tools without affecting the model itself.

Organizational issues
The bigger the company, the bigger the number of roles/groups involved in the development lifecycle. It is not infrequent to bump into difficulties like misunderstandings and delays. In other words, the different perspectives or processes that take part in the overall design may conflict with one another.



BEYOND SOA IS SPOA!



The actual implementation of SOA within enterprises has to face the problems above, which makes it difficult to achieve the benefits of SOA. Sense – SENsitive Services Environment – fully implements SOA and goes beyond it, with the concept of SPOA – Single Point Of Accountability.

The major concern of SOA is the architecture.
It has to deal with services from a technical perspective. Services, in the IT world, are merely responses to invocations: no more than the results of algorithms. Or, from a higher point of view, objects running on an underlying middleware. This has little to do with clients and customers, who think of services as real answers to real requests.

The major concern of SPOA is the business service. Whatever middleware is responsible for making the objects run, in the customer's perspective the service is (for example) his/her balance displayed on his/her mobile phone, at any hour, with a good response time.
It is the business service (that is, the customer) that sets the rules that will drive development, and those rules will live inside the service as long as it is running.

With Sense, the service is a single point of accountability, from business design to operation execution. The further the design proceeds, the bigger the number of perspectives that converge on the service. Every perspective “added” on top of the service results in a new Service Level Agreement (SLA) that represents and guarantees the goals of that particular perspective.

Sense implements the Single Model of development, letting every role involved in the entire process add its own information and set its own constraints. Even if a certain role uses its own tool (not integrated with the others), this results in adding an SLA (Service Level Agreement) on the service, and that SLA will affect service execution over its life.
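The idea of perspectives stacking SLAs on a single accountable service can be sketched like this. The role names and the latency-only SLA are simplifications of the sketch, not Sense's model:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: each role adds its own SLA constraint to the same
// service, and the running service must honor the strictest of them.
final class AccountableService {
    private final Map<String, Long> latencySlasMillis = new LinkedHashMap<>();

    void addPerspective(String role, long maxLatencyMillis) {
        latencySlasMillis.put(role, maxLatencyMillis);
    }

    // Effective bound: the single service answers to every perspective at once.
    long effectiveLatencySlaMillis() {
        return latencySlasMillis.values().stream()
                .mapToLong(Long::longValue).min().orElse(Long.MAX_VALUE);
    }
}
```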

Sense and SPOA help companies address their organizational issues. Software platforms, including Sense, will never be the full answer to the organizational issues described, which mainly depend on the behaviour of individuals.

Nevertheless, Sense and its SPOA key concept bring a really new approach to development.
It focuses on cooperation: multiple roles converge on the same object, each of them responsible (accountable) for its own perspective (SLA), while the object remains the same. It is a cultural change that leads to improvement and higher awareness.

CONCLUSION

Looking at future plans for software development, companies think of SOA as the paradigm to protect IT investments and exploit the full potential of information systems. When approaching SOA at the enterprise level, three categories of problems arise: the number of roles involved, the development model and the organizational issues. All of these seem to weaken SOA's potential.

The Sense - SENsitive Services Environment platform and its SPOA key concept offer a way to build SOA applications and realize the promises of the paradigm. Instead of adding a new set of tools, SPOA redefines the concept of a service, allowing it to escape the cage of technical issues and move closer to the source of company profits: the customer. Once redefined, the business service collects perspectives throughout its design process, each one reflecting the point of view of the role that set it.

The task of redefining the service is both technical and cultural. In this way Sense introduces a different approach, one that brings shorter development times, a more effective process and a better understanding of company goals. In the Sense environment, this is called Enterprise Common Sense.

Monday, March 31, 2008

mondora releases Oblique Scaling™

Is there a missing piece between the virtualization, management, and application stacks? Yes. Applications speak languages that virtualization platforms cannot understand, and as a result they are fully unmanageable.


Applications cannot report their suffering to the outside world, and everything is managed only when the system is broken. The only things IT managers can do are restarting machines, switching on CPUs, and the like. A newer approach is to monitor the system, trying to anticipate suffering by watching CPU and other metrics.

This approach makes some sense in operational management, but it does not make sense in business terms, because it tries to make predictions from hardware-collected data rather than from real business needs.

This is a bottom-up approach, where hardware drives software and then business. Hardware, like software, is a vehicle for helping the business, not its driver.

Imagine a hardware producer that wants to sell "normal power" and "extra power" to a startup company. "Normal power" is the baseline business expectation: what the next days of business will really be; "extra power" is something unpredictable that everyone hopes for (a big success of a campaign, and so on).

This is the way! The preferred approach is to let the business drive how the application is suffering, and then choose the right policy to make it profitable.
This approach is preferable in terms of return on investment: the application and the hardware are considered a single infrastructure that adapts to business needs, not the other way around.

Sense is a platform able to describe how a system performs over time and to switch over application nodes, anticipating possible faults. Over time, horizontal scaling may not be enough: the application needs more power through vertical scaling, and usually "time slots" of CPU time are available.

At mondora, horizontal and vertical scaling are handled as an application feature that cross-cuts the business behaviors. Like the highly available services in the System Federation of a SOA system, with different protocols and different Service Level Agreement configurations, mondora enables your application to perceive hardware as an external service itself. Treated as a highly available service, hardware can be instrumented with a Service Level Agreement specification and self-tuned to meet the best business value while the business is computing.

Sense implements several levels of service statuses; this lets the system predict when it is overloading and choose the best strategy for survival.

This is done before a fault occurs, letting the application feel when things are going badly in business terms.

Different strategies can be implemented when choosing, from the business point of view, how to scale.

The business drives, and mondora has produced several strategy implementations.

For example:
- Worst Performance, Increase Power
- Latency vs Operational Time

All the above strategies are implemented and deployed at runtime, as plug-in Scents that switch on (and off) only the power needed to let the application run while meeting its SLA.
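As a sketch, the first strategy above ("Worst Performance, Increase Power") might look like this; the thresholds and the pluggable interface shape are illustrative assumptions:

```java
// Hypothetical sketch of one pluggable scaling strategy: if the worst
// recent latency breaches the SLA, power up; if it leaves plenty of
// headroom, power down. Thresholds are illustrative.
enum PowerDecision { INCREASE, HOLD, DECREASE }

interface ScalingStrategy {
    PowerDecision decide(long slaMillis, long[] recentLatenciesMillis);
}

final class WorstPerformanceIncreasePower implements ScalingStrategy {
    public PowerDecision decide(long slaMillis, long[] recentLatenciesMillis) {
        long worst = 0;
        for (long l : recentLatenciesMillis) worst = Math.max(worst, l);
        if (worst > slaMillis) return PowerDecision.INCREASE;     // SLA breached
        if (worst < slaMillis / 2) return PowerDecision.DECREASE; // headroom to save
        return PowerDecision.HOLD;
    }
}
```

A "Latency vs Operational Time" strategy would implement the same interface with a different decide body, which is what makes runtime plug-in deployment possible.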

This is like the common virtualization layer, but it adds more: business rules that target savings of every type, from energy savings to operational cost savings. To obtain those savings, the hardware must support powering parts on and off without physical switches. Sense will drive your savings to meet the declared SLA.

Traditional virtualization stays under the hood; here it is perceived by the application, which lets applications that make Sense scale vertically across several platforms.

For further information about Oblique Scaling contact sense@mondora.com

Sunday, March 23, 2008

OpenLDAP and HP-UX: an industrial-strength solution

Enterprise computing requires industrial-strength behavior that can survive a big amount of data and increasing load.

Open source is sometimes not appreciated because it is thought of as "not ready" for the enterprise era.
These days, a good fellow, Paolo, and I are testing OpenLDAP performance in an enterprise architecture.
The stress task goals were given on short notice, last Thursday, and the results must be produced in a short time: next Monday at 12.00!
I love to approach everything with these objectives:
- learn something new;
- produce something valuable;
- enjoy it meanwhile, even under stress.

Starting with "learning something new": what could be better than two HP machines partitioned into four vPars, each machine with 64 GB of memory and 300 GB of disk? OK, HP-UX is not as easy to understand as Linux, but with Bash on it, it becomes easier.

I like doing a job in pairs, because:
- knowledge transfer is facilitated;
- stress is shared.

Our client configuration for processing huge LDIF files (millions of entries) is an 8-core Mac Pro, and the client machines we work on are our MacBook Pros, a 17" and a 15".
To better see what's going on, an Apple 30" Cinema Display is connected to the 17" MacBook, and to keep each client interface in view we:
- connected an external keyboard and mouse through the Cinema Display, so that two users can work together on the same computer;
- shared the 15" MacBook through VNC; it is the client for writing documents and wikis, and for browsing the net for useful LDAP patches.

In terms of software, after a quick look around the net, we chose SLAMD as our helper for the stress tests.
SLAMD is an open-source tool from Sun, designed mainly to run jobs against an LDAP server. It is a web application, written in Java, that runs in a web container like Tomcat. We configured SLAMD on a virtual partition near the master LDAP server.
SLAMD is a good tool for stress testing, with many native features that, in our case, cover all the test cases. We chose this tool because the time to market of our stress test report is really short: in three days we have to configure the platform, run different kinds of tests and produce a high-quality report. SLAMD meets a lot of our requirements:

1. It is strongly LDAP oriented
2. It is fully web administrable and configurable
3. It organizes distributed clients over the net in the Grinder fashion, with the idea of distributed computing agents
4. Jobs can be scheduled over time with simple clicks
5. It implements a report engine that exports PDF, HTML and text reports with graphs and the statistical information collected
6. It's easy to install: just untar and launch (in the Tomcat edition)


During the stress testing, the system was monitored with Glance running on each single vPar, watching CPU performance. HP Glance fulfills the need for a simple-to-use, simple-to-understand performance product that examines what is happening on a system right now. It displays the usage of the three critical system resources (CPU, disk, and memory) from a system-wide point of view, and then highlights running programs of special interest. It also allows you to isolate a particular job, session or process for additional detail, if desired.


Testing approach

Tests were conducted to observe how the system works under different loads, measuring average response time, peaks and other useful information.
Since the first test, the above architecture, with Master and Slaves connected, has been respected. The main rule for the test runs is: even if a test exercises only one component of the architecture, that component should not be isolated and stressed atomically. This guarantees that we monitor not a single component on its own, but a single component within a complex system.
The variables during testing are many, and span across:
• time of testing;
• number of operations;
• when and where to execute operations;
• level of concurrency;
• level of randomness.

The variables shift during testing with these criteria:

Variable             Starting from    Moving to      Comments
Users                1 million        2 million      first system setup
Users                2 million        6 million      stress testing
Stress period        10 minutes       60 minutes     initial load
Stress period        60 minutes       180 minutes    work load
Stress period        180 minutes      720 minutes    endurance test
Threads per client   1                5              endurance stress test


Testing was approached with concurrent clients and a factor level for update, delete, search and add operations.

The concept of a User is a complete set of entries from the inetorgperson schema.
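As an illustration, the core of a random-search job could pick its target uid like the sketch below. The user0..userN-1 naming scheme and the filter format are assumptions of the sketch, not the actual directory layout used in the tests:

```java
import java.util.Random;

// Hypothetical sketch of random-search job logic: pick a uid uniformly
// from the customer base and build the LDAP filter for the search.
final class RandomSearchJob {
    private final Random random;
    private final int customerBase;

    RandomSearchJob(int customerBase, long seed) {
        this.customerBase = customerBase;
        this.random = new Random(seed);
    }

    // Each call yields the filter for one random search against the base.
    String nextFilter() {
        return "(uid=user" + random.nextInt(customerBase) + ")";
    }
}
```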

1st Benchmark test: 1 million users - 3 concurrent slaves

This test was conducted to check the whole infrastructure, with the Master under write stress and the slaves reading the just-replicated information.


Configuration          Value
Writes per second      20 tx/sec
Number of slaves       3 slaves
Shared memory          4 GB
Customer base          1 million
Replication mechanism  slurpd
Test time              10 minutes



The test produced:
average search response: 0.6 msec

Shared memory figures:

Text RSS/VSS:  2.5 MB / 3.8 MB
Data RSS/VSS:  1.7 GB / 1.9 GB
Stack RSS/VSS: 32 KB / 32 KB
Shmem RSS/VSS: 3.3 GB / 4.0 GB
Other RSS/VSS: 9.5 MB / 152 MB





Test Considerations
Overall, 11,786 ADD operations were performed on the master node, and it was verified that propagation to the 3 slaves through the slurpd daemon completed successfully. During the slaves' synchronization, 3 different jobs were stressing those 3 slaves with random searches, at 0.6 msec per transaction.


2nd Benchmark test: 2 million users - 3 concurrent slaves

This test was conducted to check the whole infrastructure, with the Master under write stress and the slaves reading the just-replicated information.



Configuration          Value
Writes per second      20 tx/sec
Number of slaves       3 slaves
Shared memory          4 GB
Customer base          2 million
Replication mechanism  slurpd
Test time              10 minutes



The test produced:
average search response: 0.592 msec

Shared memory figures:

Text RSS/VSS:  2.6 MB / 3.8 MB
Data RSS/VSS:  2.0 GB / 2.2 GB
Stack RSS/VSS: 32 KB / 32 KB
Shmem RSS/VSS: 2.7 GB / 4.0 GB
Other RSS/VSS: 9.2 MB / 111 MB






Test Considerations

Overall, 2,850 ADD, 3,084 DELETE and 5,875 UPDATE operations were performed on the master node, and it was verified that propagation to the 3 slaves through the slurpd daemon completed successfully. While the Master replicated to the slaves, 3 different jobs were stressing the 3 slaves with random searches, at 0.592 msec per transaction.





3rd Benchmark test: 6 million users - 3 concurrent slaves - 10 min run, 1:2:1

This test was conducted to check the whole infrastructure, with the Master under write stress and the slaves reading the just-replicated information.



Configuration          Value
Writes per second      20 tx/sec
Number of slaves       3 slaves
Shared memory          16 GB
Customer base          6 million
Replication mechanism  slurpd
Test time              10 minutes



The test produced:
average search response: 0.485 msec

Shared memory figures:

Text RSS/VSS:  2.5 MB / 3.8 MB
Data RSS/VSS:  5.6 GB / 6.6 GB
Stack RSS/VSS: 32 KB / 32 KB






Test Considerations

Overall, 206,614 ADD, 224,583 DELETE and 427,186 UPDATE operations were performed on the master node, and it was verified that propagation to the 3 slaves through the slurpd daemon completed successfully. While the Master replicated to the slaves, 3 different jobs were stressing those 3 slaves with random searches, at 0.485 msec per transaction.





4th Benchmark test: 6 million users - 3 concurrent slaves - 10 min run, 10:1:1:1

This test was conducted to check the whole infrastructure, with the Master under write stress and the slaves reading the just-replicated information.



Configuration          Value
Writes per second      20 tx/sec
Number of slaves       3 slaves
Shared memory          16 GB
Customer base          6 million
Replication mechanism  slurpd
Test time              10 minutes



The test produced:
average search response: 0.488 msec

Shared memory figures:

Text RSS/VSS:  2.5 MB / 3.8 MB
Data RSS/VSS:  5.6 GB / 6.6 GB
Stack RSS/VSS: 32 KB / 32 KB






Test Considerations

Overall, 3,975 ADD, 3,977 DELETE and 3,817 UPDATE operations were performed on the master node, and it was verified that propagation to the 3 slaves through the slurpd daemon completed successfully. While the Master replicated to the slaves, 3 different jobs were stressing those 3 slaves with random searches, at 0.488 msec per transaction.





5th Benchmark test: 6 million users - Random search

This test was conducted to check the whole infrastructure, with a slave under read test on a random cluster.



Configuration          Value
Writes per second      20 tx/sec
Number of slaves       3 slaves
Shared memory          16 GB
Customer base          6 million
Replication mechanism  slurpd
Test time              10 minutes
Random cluster size    1 million



The test produced:
average search response: 0.436 msec

Shared memory figures:

Text RSS/VSS:  2.5 MB / 3.8 MB
Data RSS/VSS:  5.6 GB / 6.6 GB
Stack RSS/VSS: 32 KB / 32 KB
Shmem RSS/VSS: 11 GB / 16 GB
Other RSS/VSS: 10 MB / 160 MB




Test Considerations
Overall, 1,357,628 searches were performed on the slave node against a cluster of 1,000,000 random users.


6th Benchmark test: 6 million users - Update - Modify - Delete - 600 msec

This test was conducted to check the whole infrastructure, with the Master under write stress and the slaves reading the just-replicated information.



Configuration          Value
Writes per second      20 tx/sec
Number of slaves       3 slaves
Shared memory          16 GB
Customer base          6 million
Replication mechanism  slurpd
Test time              10 minutes



The test produced:
transactions per second (update): 236.4
transactions per second (add): 236.877
transactions per second (delete): 236.4


Shared memory figures:

Text RSS/VSS:  2.5 MB / 3.8 MB
Data RSS/VSS:  5.6 GB / 6.6 GB
Stack RSS/VSS: 32 KB / 32 KB
Shmem RSS/VSS: 11 GB / 16 GB
Other RSS/VSS: 10 MB / 160 MB







Test Considerations

Overall, 141,081 ADD, 141,160 DELETE and 141,021 UPDATE operations were performed on the master node, and it was verified that propagation to the 3 slaves through the slurpd daemon completed successfully.







7th Benchmark test: 6 million users - 10:1:2:1 - 12 hours

This test was conducted to check the whole infrastructure with the master under write stress and the slaves reading the freshly replicated information.



Configuration:
Writes per second: 20 tx/sec
Number of slaves: 3
Shared memory: 16 GB
Customer base: 6 million
Replication mechanism: slurpd
Test time: 12 hours



The test produced:
search: 1984.697 tx/sec
update: 9.890 tx/sec
add: 4.991 tx/sec
delete: 4.991 tx/sec
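A mixed workload like this can be driven by a weighted scheduler. The sketch below is hypothetical (the actual load tool is not shown) and assumes the 10:1:2:1 heading maps to search:add:update:delete, which matches the reported write rates (update ≈ 2× add ≈ 2× delete):

```python
import random
from collections import Counter

# Hypothetical weights for the mixed workload (search:add:update:delete).
WEIGHTS = {"search": 10, "add": 1, "update": 2, "delete": 1}

def next_operation(rng=random):
    """Draw the next operation type according to the configured mix."""
    ops, weights = zip(*WEIGHTS.items())
    return rng.choices(ops, weights=weights, k=1)[0]

# Over many draws the observed mix converges to the configured ratio:
counts = Counter(next_operation() for _ in range(14_000))
```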


Shared memory footprint:

Text RSS/VSS: 2.5 MB / 3.8 MB
Data RSS/VSS: 5.7 GB / 6.8 GB
Stack RSS/VSS: 32 KB / 32 KB
Shmem RSS/VSS: 11 GB / 16 GB
Other RSS/VSS: 11 MB / 168 MB







Test Considerations

In total, the master node executed 67,070,660 searches, 427,186 modifies (49.766%), 224,583 deletes (26.163%) and 206,614 adds (24.070%), and propagation to the 3 slaves through the slurpd daemon was verified to complete successfully.






8th Benchmark test: 6 million users - 10:1:1:1 - 1 hour with concurrency only on master

This test was conducted to check the whole infrastructure with the master alone under mixed read and write stress.



Configuration:
Writes per second: 20 tx/sec
Nodes: 1 master
Shared memory: 16 GB
Customer base: 6 million
Replication mechanism: slurpd
Test time: 1 hour



The test produced:
search: 969.444 tx/sec
update: 96.760 tx/sec
add: 97.077 tx/sec
delete: 96.967 tx/sec


Shared memory footprint:

Text RSS/VSS: 2.5 MB / 3.8 MB
Data RSS/VSS: 5.7 GB / 6.8 GB
Stack RSS/VSS: 32 KB / 32 KB
Shmem RSS/VSS: 11 GB / 16 GB
Other RSS/VSS: 11 MB / 168 MB















9th Benchmark test: 6 million users - 10:2:2:2 - 1 hour with concurrency only on master

This test was conducted to check the whole infrastructure with the master alone under mixed read and write stress.



Configuration:
Writes per second: 20 tx/sec
Nodes: 1 master
Shared memory: 16 GB
Customer base: 6 million
Replication mechanism: slurpd
Test time: 1 hour



The test produced:
search: 678.123 tx/sec
update: 135.643 tx/sec
add: 135.703 tx/sec
delete: 134.770 tx/sec


Shared memory footprint:

Text RSS/VSS: 2.5 MB / 3.8 MB
Data RSS/VSS: 5.7 GB / 6.8 GB
Stack RSS/VSS: 32 KB / 32 KB
Shmem RSS/VSS: 11 GB / 16 GB
Other RSS/VSS: 11 MB / 168 MB









10th Benchmark test: 6 million users - Random search

This test was conducted to check the whole infrastructure with a slave under random-search read load.



Configuration:
Writes per second: 20 tx/sec
Nodes: 1 master
Shared memory: 16 GB
Customer base: 6 million
Replication mechanism: slurpd
Test time: 1 hour



The test produced:
average search response: 1.686 msec

Shared memory footprint:

Text RSS/VSS: 2.5 MB / 3.8 MB
Data RSS/VSS: 5.6 GB / 6.6 GB
Stack RSS/VSS: 32 KB / 32 KB
Shmem RSS/VSS: 11 GB / 16 GB
Other RSS/VSS: 10 MB / 160 MB





Test Considerations

In total, 17,976,344 random searches were performed on the slave node against a cluster of 1,000,000 random users.



Conclusion

OpenLDAP performed very well in the HP-UX cluster environment: throughout the entire test phase, no restart of slapd or slurpd was ever required.

Both OpenLDAP and HP-UX were used as-is, without any performance tuning. OpenLDAP was patched with the official patches available online.

The vPar configuration was tested while OpenLDAP was running, with hot core (CPU) addition enabled at run time.